I0701 10:50:16.734717       7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0701 10:50:16.734925       7 e2e.go:124] Starting e2e run "8a1527b4-9ada-482e-88f5-fefb873032fb" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1593600615 - Will randomize all specs
Will run 275 of 4992 specs

Jul 1 10:50:16.785: INFO: >>> kubeConfig: /root/.kube/config
Jul 1 10:50:16.789: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 1 10:50:16.819: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 1 10:50:16.857: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 1 10:50:16.857: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 1 10:50:16.857: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 1 10:50:16.866: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 1 10:50:16.866: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 1 10:50:16.866: INFO: e2e test version: v1.18.2
Jul 1 10:50:16.867: INFO: kube-apiserver version: v1.18.2
Jul 1 10:50:16.867: INFO: >>> kubeConfig: /root/.kube/config
Jul 1 10:50:16.872: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 1 10:50:16.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
Jul 1 10:50:16.994: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5445 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5445;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5445 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5445;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5445.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5445.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5445.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5445.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5445.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5445.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5445.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5445.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5445.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5445.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5445.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5445.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5445.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 178.151.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.151.178_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 178.151.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.151.178_tcp@PTR;
  sleep 1;
done
STEP: Running these commands on jessie:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5445 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5445;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5445 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5445;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5445.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5445.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5445.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5445.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5445.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5445.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5445.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5445.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5445.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5445.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5445.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5445.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5445.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 178.151.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.151.178_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 178.151.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.151.178_tcp@PTR;
  sleep 1;
done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 1 10:50:25.242: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.245: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.247: INFO: Unable to read wheezy_udp@dns-test-service.dns-5445 from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.250: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5445 from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.252: INFO: Unable to read wheezy_udp@dns-test-service.dns-5445.svc from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.255: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5445.svc from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.258: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5445.svc from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
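The probe commands above all follow one check-and-mark pattern: run `dig +search` against a partial name, and only if the answer section is non-empty, drop an `OK` marker file under `/results/` for the harness to read back later through the pod API (the doubled `$$` defers variable expansion to the probe container's shell). A minimal stand-alone sketch of that pattern, using a hypothetical `lookup` stub in place of a live `dig` so it runs without a resolver:

```shell
# Sketch of the e2e probe pattern: run a lookup and, only when it returns a
# non-empty answer, write an OK marker file that the harness can collect.
# `lookup` is a stand-in for `dig +notcp +noall +answer +search <name> A`.
RESULTS=$(mktemp -d)

lookup() {
    # Hypothetical resolver stub: only the test-service names "resolve" here.
    case "$1" in
        dns-test-service*) echo "dns-test-service.dns-5445.svc.cluster.local. 30 IN A 10.99.151.178" ;;
        *) ;;  # empty answer -> the -n test fails, no marker is written
    esac
}

for name in dns-test-service dns-test-service.dns-5445 unknown-service; do
    check="$(lookup "$name")" && test -n "$check" && echo OK > "$RESULTS/udp@$name"
done

ls "$RESULTS"   # marker files exist only for the names that resolved
```

The harness then polls those marker files; a missing file is what surfaces in the log as an "Unable to read ..." retry.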
Jul 1 10:50:25.261: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5445.svc from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.278: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.281: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.284: INFO: Unable to read jessie_udp@dns-test-service.dns-5445 from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.287: INFO: Unable to read jessie_tcp@dns-test-service.dns-5445 from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.290: INFO: Unable to read jessie_udp@dns-test-service.dns-5445.svc from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.293: INFO: Unable to read jessie_tcp@dns-test-service.dns-5445.svc from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.296: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5445.svc from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.299: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5445.svc from pod dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e: the server could not find the requested resource (get pods dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e)
Jul 1 10:50:25.318: INFO: Lookups using dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5445 wheezy_tcp@dns-test-service.dns-5445 wheezy_udp@dns-test-service.dns-5445.svc wheezy_tcp@dns-test-service.dns-5445.svc wheezy_udp@_http._tcp.dns-test-service.dns-5445.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5445.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5445 jessie_tcp@dns-test-service.dns-5445 jessie_udp@dns-test-service.dns-5445.svc jessie_tcp@dns-test-service.dns-5445.svc jessie_udp@_http._tcp.dns-test-service.dns-5445.svc jessie_tcp@_http._tcp.dns-test-service.dns-5445.svc]
[... identical "Unable to read ..." failures and "Lookups ... failed for:" summaries for the same 16 names repeated in each ~5s retry round at 10:50:30, 10:50:35, 10:50:40, 10:50:45, and 10:50:50; elided ...]
Jul 1 10:50:55.405: INFO: DNS probes using dns-5445/dns-test-a418ef96-f9bb-48db-a3f6-70c5ae5c645e succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 1 10:50:55.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5445" for this suite.
• [SLOW TEST:39.082 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":1,"skipped":19,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:50:55.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7726 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7726 STEP: creating replication controller externalsvc in namespace services-7726 I0701 10:50:56.257993 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7726, replica count: 2 I0701 10:50:59.308426 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 
running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 10:51:02.308729 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jul 1 10:51:02.423: INFO: Creating new exec pod Jul 1 10:51:06.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-7726 execpodx7m45 -- /bin/sh -x -c nslookup nodeport-service' Jul 1 10:51:09.831: INFO: stderr: "I0701 10:51:09.710452 31 log.go:172] (0xc0000e1130) (0xc000695680) Create stream\nI0701 10:51:09.710511 31 log.go:172] (0xc0000e1130) (0xc000695680) Stream added, broadcasting: 1\nI0701 10:51:09.713720 31 log.go:172] (0xc0000e1130) Reply frame received for 1\nI0701 10:51:09.713756 31 log.go:172] (0xc0000e1130) (0xc00065a000) Create stream\nI0701 10:51:09.713766 31 log.go:172] (0xc0000e1130) (0xc00065a000) Stream added, broadcasting: 3\nI0701 10:51:09.714677 31 log.go:172] (0xc0000e1130) Reply frame received for 3\nI0701 10:51:09.714711 31 log.go:172] (0xc0000e1130) (0xc000674000) Create stream\nI0701 10:51:09.714720 31 log.go:172] (0xc0000e1130) (0xc000674000) Stream added, broadcasting: 5\nI0701 10:51:09.715566 31 log.go:172] (0xc0000e1130) Reply frame received for 5\nI0701 10:51:09.799751 31 log.go:172] (0xc0000e1130) Data frame received for 5\nI0701 10:51:09.799779 31 log.go:172] (0xc000674000) (5) Data frame handling\nI0701 10:51:09.799799 31 log.go:172] (0xc000674000) (5) Data frame sent\n+ nslookup nodeport-service\nI0701 10:51:09.821844 31 log.go:172] (0xc0000e1130) Data frame received for 3\nI0701 10:51:09.821884 31 log.go:172] (0xc00065a000) (3) Data frame handling\nI0701 10:51:09.821905 31 log.go:172] (0xc00065a000) (3) Data frame sent\nI0701 10:51:09.822820 31 log.go:172] (0xc0000e1130) Data frame received for 3\nI0701 10:51:09.822838 31 log.go:172] 
(0xc00065a000) (3) Data frame handling\nI0701 10:51:09.822854 31 log.go:172] (0xc00065a000) (3) Data frame sent\nI0701 10:51:09.823439 31 log.go:172] (0xc0000e1130) Data frame received for 5\nI0701 10:51:09.823465 31 log.go:172] (0xc000674000) (5) Data frame handling\nI0701 10:51:09.823483 31 log.go:172] (0xc0000e1130) Data frame received for 3\nI0701 10:51:09.823490 31 log.go:172] (0xc00065a000) (3) Data frame handling\nI0701 10:51:09.825523 31 log.go:172] (0xc0000e1130) Data frame received for 1\nI0701 10:51:09.825551 31 log.go:172] (0xc000695680) (1) Data frame handling\nI0701 10:51:09.825580 31 log.go:172] (0xc000695680) (1) Data frame sent\nI0701 10:51:09.825698 31 log.go:172] (0xc0000e1130) (0xc000695680) Stream removed, broadcasting: 1\nI0701 10:51:09.825724 31 log.go:172] (0xc0000e1130) Go away received\nI0701 10:51:09.826127 31 log.go:172] (0xc0000e1130) (0xc000695680) Stream removed, broadcasting: 1\nI0701 10:51:09.826148 31 log.go:172] (0xc0000e1130) (0xc00065a000) Stream removed, broadcasting: 3\nI0701 10:51:09.826158 31 log.go:172] (0xc0000e1130) (0xc000674000) Stream removed, broadcasting: 5\n" Jul 1 10:51:09.831: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7726.svc.cluster.local\tcanonical name = externalsvc.services-7726.svc.cluster.local.\nName:\texternalsvc.services-7726.svc.cluster.local\nAddress: 10.106.21.129\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7726, will wait for the garbage collector to delete the pods Jul 1 10:51:09.890: INFO: Deleting ReplicationController externalsvc took: 5.156918ms Jul 1 10:51:10.190: INFO: Terminating ReplicationController externalsvc pods took: 300.287277ms Jul 1 10:51:23.891: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:51:23.908: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "services-7726" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:27.967 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":2,"skipped":23,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:51:23.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0701 10:51:25.146535 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 1 10:51:25.146: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:51:25.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8887" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":3,"skipped":24,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:51:25.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 1 10:51:25.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c39fdaa-c9b1-4ba8-a704-c51d802db8f1" in namespace "downward-api-9375" to be "Succeeded or Failed" Jul 1 10:51:25.439: INFO: Pod "downwardapi-volume-9c39fdaa-c9b1-4ba8-a704-c51d802db8f1": Phase="Pending", Reason="", readiness=false. Elapsed: 31.315345ms Jul 1 10:51:27.443: INFO: Pod "downwardapi-volume-9c39fdaa-c9b1-4ba8-a704-c51d802db8f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035794999s Jul 1 10:51:29.447: INFO: Pod "downwardapi-volume-9c39fdaa-c9b1-4ba8-a704-c51d802db8f1": Phase="Running", Reason="", readiness=true. Elapsed: 4.038843011s Jul 1 10:51:31.451: INFO: Pod "downwardapi-volume-9c39fdaa-c9b1-4ba8-a704-c51d802db8f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.043464734s STEP: Saw pod success Jul 1 10:51:31.451: INFO: Pod "downwardapi-volume-9c39fdaa-c9b1-4ba8-a704-c51d802db8f1" satisfied condition "Succeeded or Failed" Jul 1 10:51:31.454: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9c39fdaa-c9b1-4ba8-a704-c51d802db8f1 container client-container: STEP: delete the pod Jul 1 10:51:31.508: INFO: Waiting for pod downwardapi-volume-9c39fdaa-c9b1-4ba8-a704-c51d802db8f1 to disappear Jul 1 10:51:31.526: INFO: Pod downwardapi-volume-9c39fdaa-c9b1-4ba8-a704-c51d802db8f1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:51:31.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9375" for this suite. • [SLOW TEST:6.380 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":24,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:51:31.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 1 10:51:36.928: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:51:36.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3456" for this suite. • [SLOW TEST:5.464 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":30,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch 
notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:51:37.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jul 1 10:51:37.256: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7106 /api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-a b0fb9568-92ce-4852-acbb-10ff58c360c4 16781725 0 2020-07-01 10:51:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 10:51:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 10:51:37.257: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7106 /api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-a b0fb9568-92ce-4852-acbb-10ff58c360c4 16781725 0 2020-07-01 10:51:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 10:51:37 +0000 UTC FieldsV1 
FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jul 1 10:51:47.263: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7106 /api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-a b0fb9568-92ce-4852-acbb-10ff58c360c4 16781766 0 2020-07-01 10:51:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 10:51:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 10:51:47.263: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7106 /api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-a b0fb9568-92ce-4852-acbb-10ff58c360c4 16781766 0 2020-07-01 10:51:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 10:51:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jul 1 10:51:57.283: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7106 /api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-a b0fb9568-92ce-4852-acbb-10ff58c360c4 16781796 0 2020-07-01 10:51:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 10:51:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 10:51:57.283: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7106 /api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-a b0fb9568-92ce-4852-acbb-10ff58c360c4 16781796 0 2020-07-01 10:51:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 10:51:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jul 1 10:52:07.290: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7106 
/api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-a b0fb9568-92ce-4852-acbb-10ff58c360c4 16781826 0 2020-07-01 10:51:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 10:51:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 10:52:07.291: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7106 /api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-a b0fb9568-92ce-4852-acbb-10ff58c360c4 16781826 0 2020-07-01 10:51:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 10:51:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jul 1 10:52:17.300: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7106 /api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-b 4930bcff-d183-4cca-80b2-e3c3d639bee0 16781856 0 2020-07-01 10:52:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-01 10:52:17 +0000 UTC FieldsV1 
FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 10:52:17.300: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7106 /api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-b 4930bcff-d183-4cca-80b2-e3c3d639bee0 16781856 0 2020-07-01 10:52:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-01 10:52:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jul 1 10:52:27.307: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7106 /api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-b 4930bcff-d183-4cca-80b2-e3c3d639bee0 16781885 0 2020-07-01 10:52:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-01 10:52:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 10:52:27.307: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7106 /api/v1/namespaces/watch-7106/configmaps/e2e-watch-test-configmap-b 4930bcff-d183-4cca-80b2-e3c3d639bee0 16781885 0 
2020-07-01 10:52:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-01 10:52:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:52:37.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7106" for this suite. • [SLOW TEST:60.515 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":6,"skipped":43,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:52:37.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 1 10:52:37.836: INFO: Create a RollingUpdate DaemonSet Jul 1 10:52:37.879: INFO: Check that daemon pods launch on every node of the cluster Jul 1 10:52:37.892: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 10:52:37.986: INFO: Number of nodes with available pods: 0 Jul 1 10:52:37.986: INFO: Node kali-worker is running more than one daemon pod Jul 1 10:52:38.991: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 10:52:38.995: INFO: Number of nodes with available pods: 0 Jul 1 10:52:38.995: INFO: Node kali-worker is running more than one daemon pod Jul 1 10:52:40.000: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 10:52:40.054: INFO: Number of nodes with available pods: 0 Jul 1 10:52:40.054: INFO: Node kali-worker is running more than one daemon pod Jul 1 10:52:41.155: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 10:52:41.173: INFO: Number of nodes with available pods: 0 Jul 1 10:52:41.173: INFO: Node kali-worker is running more than one daemon pod Jul 1 10:52:42.021: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 10:52:42.024: INFO: Number of nodes with available pods: 
0 Jul 1 10:52:42.024: INFO: Node kali-worker is running more than one daemon pod Jul 1 10:52:42.991: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 10:52:42.994: INFO: Number of nodes with available pods: 2 Jul 1 10:52:42.994: INFO: Number of running nodes: 2, number of available pods: 2 Jul 1 10:52:42.994: INFO: Update the DaemonSet to trigger a rollout Jul 1 10:52:43.000: INFO: Updating DaemonSet daemon-set Jul 1 10:52:54.020: INFO: Roll back the DaemonSet before rollout is complete Jul 1 10:52:54.025: INFO: Updating DaemonSet daemon-set Jul 1 10:52:54.025: INFO: Make sure DaemonSet rollback is complete Jul 1 10:52:54.031: INFO: Wrong image for pod: daemon-set-nlg9h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 1 10:52:54.032: INFO: Pod daemon-set-nlg9h is not available Jul 1 10:52:54.051: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 10:52:55.056: INFO: Wrong image for pod: daemon-set-nlg9h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 1 10:52:55.056: INFO: Pod daemon-set-nlg9h is not available Jul 1 10:52:55.060: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 10:52:56.055: INFO: Wrong image for pod: daemon-set-nlg9h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 1 10:52:56.055: INFO: Pod daemon-set-nlg9h is not available Jul 1 10:52:56.059: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 10:52:57.056: INFO: Wrong image for pod: daemon-set-nlg9h. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 1 10:52:57.056: INFO: Pod daemon-set-nlg9h is not available Jul 1 10:52:57.059: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 10:52:58.056: INFO: Pod daemon-set-z4hsf is not available Jul 1 10:52:58.061: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1973, will wait for the garbage collector to delete the pods Jul 1 10:52:58.128: INFO: Deleting DaemonSet.extensions daemon-set took: 6.840063ms Jul 1 10:52:58.528: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.238103ms Jul 1 10:53:03.832: INFO: Number of nodes with available pods: 0 Jul 1 10:53:03.832: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 10:53:03.837: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1973/daemonsets","resourceVersion":"16782068"},"items":null} Jul 1 10:53:03.839: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1973/pods","resourceVersion":"16782068"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:53:03.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1973" for this suite. 
• [SLOW TEST:26.393 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":7,"skipped":63,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:53:03.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-5deecf9e-5f24-4181-9961-6218e35e8d63 STEP: Creating a pod to test consume secrets Jul 1 10:53:04.023: INFO: Waiting up to 5m0s for pod "pod-secrets-410b1fd7-9823-4cb9-b217-07221b20f12e" in namespace "secrets-3472" to be "Succeeded or Failed" Jul 1 10:53:04.026: INFO: Pod "pod-secrets-410b1fd7-9823-4cb9-b217-07221b20f12e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.68965ms Jul 1 10:53:06.124: INFO: Pod "pod-secrets-410b1fd7-9823-4cb9-b217-07221b20f12e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.100561043s Jul 1 10:53:08.128: INFO: Pod "pod-secrets-410b1fd7-9823-4cb9-b217-07221b20f12e": Phase="Running", Reason="", readiness=true. Elapsed: 4.104216255s Jul 1 10:53:10.132: INFO: Pod "pod-secrets-410b1fd7-9823-4cb9-b217-07221b20f12e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108125436s STEP: Saw pod success Jul 1 10:53:10.132: INFO: Pod "pod-secrets-410b1fd7-9823-4cb9-b217-07221b20f12e" satisfied condition "Succeeded or Failed" Jul 1 10:53:10.134: INFO: Trying to get logs from node kali-worker pod pod-secrets-410b1fd7-9823-4cb9-b217-07221b20f12e container secret-volume-test: STEP: delete the pod Jul 1 10:53:10.212: INFO: Waiting for pod pod-secrets-410b1fd7-9823-4cb9-b217-07221b20f12e to disappear Jul 1 10:53:10.217: INFO: Pod pod-secrets-410b1fd7-9823-4cb9-b217-07221b20f12e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:53:10.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3472" for this suite. 
• [SLOW TEST:6.318 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":64,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:53:10.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 10:53:10.958: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 10:53:13.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729197591, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729197591, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729197591, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729197590, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 10:53:16.084: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 1 10:53:16.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2386-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:53:17.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3743" for this suite. STEP: Destroying namespace "webhook-3743-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.196 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":9,"skipped":66,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:53:17.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-6d37c49c-2121-4d1c-a27c-a7db40c12fbd in namespace container-probe-7561 Jul 1 10:53:21.533: INFO: Started pod test-webserver-6d37c49c-2121-4d1c-a27c-a7db40c12fbd in namespace container-probe-7561 STEP: checking the pod's current state and 
verifying that restartCount is present Jul 1 10:53:21.537: INFO: Initial restart count of pod test-webserver-6d37c49c-2121-4d1c-a27c-a7db40c12fbd is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:57:22.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7561" for this suite. • [SLOW TEST:245.053 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":94,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:57:22.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
creating the pod Jul 1 10:57:22.952: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:57:30.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3342" for this suite. • [SLOW TEST:7.810 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":11,"skipped":111,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:57:30.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod 
to test downward API volume plugin Jul 1 10:57:30.343: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43ffa2c1-68f8-4ed6-b478-e51d59f7c452" in namespace "downward-api-7336" to be "Succeeded or Failed" Jul 1 10:57:30.346: INFO: Pod "downwardapi-volume-43ffa2c1-68f8-4ed6-b478-e51d59f7c452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550379ms Jul 1 10:57:32.378: INFO: Pod "downwardapi-volume-43ffa2c1-68f8-4ed6-b478-e51d59f7c452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034127461s Jul 1 10:57:34.381: INFO: Pod "downwardapi-volume-43ffa2c1-68f8-4ed6-b478-e51d59f7c452": Phase="Running", Reason="", readiness=true. Elapsed: 4.03784155s Jul 1 10:57:36.385: INFO: Pod "downwardapi-volume-43ffa2c1-68f8-4ed6-b478-e51d59f7c452": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041425539s STEP: Saw pod success Jul 1 10:57:36.385: INFO: Pod "downwardapi-volume-43ffa2c1-68f8-4ed6-b478-e51d59f7c452" satisfied condition "Succeeded or Failed" Jul 1 10:57:36.388: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-43ffa2c1-68f8-4ed6-b478-e51d59f7c452 container client-container: STEP: delete the pod Jul 1 10:57:36.457: INFO: Waiting for pod downwardapi-volume-43ffa2c1-68f8-4ed6-b478-e51d59f7c452 to disappear Jul 1 10:57:36.470: INFO: Pod downwardapi-volume-43ffa2c1-68f8-4ed6-b478-e51d59f7c452 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:57:36.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7336" for this suite. 
• [SLOW TEST:6.191 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:57:36.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 10:57:37.259: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 10:57:39.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729197857, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729197857, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729197857, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729197857, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 10:57:42.342: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:57:54.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "webhook-7162" for this suite. STEP: Destroying namespace "webhook-7162-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.198 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":13,"skipped":155,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:57:54.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 1 10:57:54.780: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:57:55.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9149" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":14,"skipped":171,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:57:55.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:57:56.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-162" for this suite. STEP: Destroying namespace "nspatchtest-83c09597-45d8-4bba-a678-cb8307f867e8-7040" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":15,"skipped":204,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:57:56.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4703 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4703 I0701 10:57:56.386720 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4703, replica count: 2 I0701 10:57:59.437207 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 10:58:02.437434 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 10:58:02.437: INFO: Creating new exec pod Jul 1 10:58:07.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 
--kubeconfig=/root/.kube/config exec --namespace=services-4703 execpod67zcq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jul 1 10:58:07.759: INFO: stderr: "I0701 10:58:07.593677 59 log.go:172] (0xc0009cab00) (0xc0005db720) Create stream\nI0701 10:58:07.593747 59 log.go:172] (0xc0009cab00) (0xc0005db720) Stream added, broadcasting: 1\nI0701 10:58:07.596190 59 log.go:172] (0xc0009cab00) Reply frame received for 1\nI0701 10:58:07.596259 59 log.go:172] (0xc0009cab00) (0xc000996000) Create stream\nI0701 10:58:07.596299 59 log.go:172] (0xc0009cab00) (0xc000996000) Stream added, broadcasting: 3\nI0701 10:58:07.597053 59 log.go:172] (0xc0009cab00) Reply frame received for 3\nI0701 10:58:07.597090 59 log.go:172] (0xc0009cab00) (0xc0004e9680) Create stream\nI0701 10:58:07.597264 59 log.go:172] (0xc0009cab00) (0xc0004e9680) Stream added, broadcasting: 5\nI0701 10:58:07.598183 59 log.go:172] (0xc0009cab00) Reply frame received for 5\nI0701 10:58:07.693948 59 log.go:172] (0xc0009cab00) Data frame received for 5\nI0701 10:58:07.693992 59 log.go:172] (0xc0004e9680) (5) Data frame handling\nI0701 10:58:07.694023 59 log.go:172] (0xc0004e9680) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0701 10:58:07.752987 59 log.go:172] (0xc0009cab00) Data frame received for 5\nI0701 10:58:07.753008 59 log.go:172] (0xc0004e9680) (5) Data frame handling\nI0701 10:58:07.753027 59 log.go:172] (0xc0004e9680) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0701 10:58:07.753889 59 log.go:172] (0xc0009cab00) Data frame received for 5\nI0701 10:58:07.753934 59 log.go:172] (0xc0004e9680) (5) Data frame handling\nI0701 10:58:07.753971 59 log.go:172] (0xc0009cab00) Data frame received for 3\nI0701 10:58:07.753995 59 log.go:172] (0xc000996000) (3) Data frame handling\nI0701 10:58:07.755189 59 log.go:172] (0xc0009cab00) Data frame received for 1\nI0701 10:58:07.755208 59 log.go:172] (0xc0005db720) (1) Data frame handling\nI0701 
10:58:07.755221 59 log.go:172] (0xc0005db720) (1) Data frame sent\nI0701 10:58:07.755234 59 log.go:172] (0xc0009cab00) (0xc0005db720) Stream removed, broadcasting: 1\nI0701 10:58:07.755252 59 log.go:172] (0xc0009cab00) Go away received\nI0701 10:58:07.755552 59 log.go:172] (0xc0009cab00) (0xc0005db720) Stream removed, broadcasting: 1\nI0701 10:58:07.755566 59 log.go:172] (0xc0009cab00) (0xc000996000) Stream removed, broadcasting: 3\nI0701 10:58:07.755572 59 log.go:172] (0xc0009cab00) (0xc0004e9680) Stream removed, broadcasting: 5\n" Jul 1 10:58:07.760: INFO: stdout: "" Jul 1 10:58:07.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4703 execpod67zcq -- /bin/sh -x -c nc -zv -t -w 2 10.99.179.173 80' Jul 1 10:58:07.974: INFO: stderr: "I0701 10:58:07.896393 78 log.go:172] (0xc000913a20) (0xc0008f4960) Create stream\nI0701 10:58:07.896475 78 log.go:172] (0xc000913a20) (0xc0008f4960) Stream added, broadcasting: 1\nI0701 10:58:07.902614 78 log.go:172] (0xc000913a20) Reply frame received for 1\nI0701 10:58:07.902647 78 log.go:172] (0xc000913a20) (0xc0006797c0) Create stream\nI0701 10:58:07.902655 78 log.go:172] (0xc000913a20) (0xc0006797c0) Stream added, broadcasting: 3\nI0701 10:58:07.903499 78 log.go:172] (0xc000913a20) Reply frame received for 3\nI0701 10:58:07.903546 78 log.go:172] (0xc000913a20) (0xc0004b8be0) Create stream\nI0701 10:58:07.903559 78 log.go:172] (0xc000913a20) (0xc0004b8be0) Stream added, broadcasting: 5\nI0701 10:58:07.904283 78 log.go:172] (0xc000913a20) Reply frame received for 5\nI0701 10:58:07.964638 78 log.go:172] (0xc000913a20) Data frame received for 5\nI0701 10:58:07.964664 78 log.go:172] (0xc0004b8be0) (5) Data frame handling\nI0701 10:58:07.964686 78 log.go:172] (0xc0004b8be0) (5) Data frame sent\nI0701 10:58:07.964701 78 log.go:172] (0xc000913a20) Data frame received for 5\nI0701 10:58:07.964713 78 log.go:172] (0xc0004b8be0) (5) Data frame handling\n+ 
nc -zv -t -w 2 10.99.179.173 80\nConnection to 10.99.179.173 80 port [tcp/http] succeeded!\nI0701 10:58:07.964735 78 log.go:172] (0xc0004b8be0) (5) Data frame sent\nI0701 10:58:07.964984 78 log.go:172] (0xc000913a20) Data frame received for 3\nI0701 10:58:07.965010 78 log.go:172] (0xc0006797c0) (3) Data frame handling\nI0701 10:58:07.965041 78 log.go:172] (0xc000913a20) Data frame received for 5\nI0701 10:58:07.965082 78 log.go:172] (0xc0004b8be0) (5) Data frame handling\nI0701 10:58:07.966899 78 log.go:172] (0xc000913a20) Data frame received for 1\nI0701 10:58:07.966939 78 log.go:172] (0xc0008f4960) (1) Data frame handling\nI0701 10:58:07.966995 78 log.go:172] (0xc0008f4960) (1) Data frame sent\nI0701 10:58:07.967033 78 log.go:172] (0xc000913a20) (0xc0008f4960) Stream removed, broadcasting: 1\nI0701 10:58:07.967076 78 log.go:172] (0xc000913a20) Go away received\nI0701 10:58:07.967389 78 log.go:172] (0xc000913a20) (0xc0008f4960) Stream removed, broadcasting: 1\nI0701 10:58:07.967409 78 log.go:172] (0xc000913a20) (0xc0006797c0) Stream removed, broadcasting: 3\nI0701 10:58:07.967419 78 log.go:172] (0xc000913a20) (0xc0004b8be0) Stream removed, broadcasting: 5\n" Jul 1 10:58:07.974: INFO: stdout: "" Jul 1 10:58:07.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4703 execpod67zcq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 30314' Jul 1 10:58:08.174: INFO: stderr: "I0701 10:58:08.101492 100 log.go:172] (0xc000b558c0) (0xc000a428c0) Create stream\nI0701 10:58:08.101550 100 log.go:172] (0xc000b558c0) (0xc000a428c0) Stream added, broadcasting: 1\nI0701 10:58:08.107167 100 log.go:172] (0xc000b558c0) Reply frame received for 1\nI0701 10:58:08.107245 100 log.go:172] (0xc000b558c0) (0xc0005e5540) Create stream\nI0701 10:58:08.107264 100 log.go:172] (0xc000b558c0) (0xc0005e5540) Stream added, broadcasting: 3\nI0701 10:58:08.108310 100 log.go:172] (0xc000b558c0) Reply frame received for 
3\nI0701 10:58:08.108333 100 log.go:172] (0xc000b558c0) (0xc000500960) Create stream\nI0701 10:58:08.108340 100 log.go:172] (0xc000b558c0) (0xc000500960) Stream added, broadcasting: 5\nI0701 10:58:08.109584 100 log.go:172] (0xc000b558c0) Reply frame received for 5\nI0701 10:58:08.166282 100 log.go:172] (0xc000b558c0) Data frame received for 3\nI0701 10:58:08.166331 100 log.go:172] (0xc0005e5540) (3) Data frame handling\nI0701 10:58:08.166451 100 log.go:172] (0xc000b558c0) Data frame received for 5\nI0701 10:58:08.166555 100 log.go:172] (0xc000500960) (5) Data frame handling\nI0701 10:58:08.166656 100 log.go:172] (0xc000500960) (5) Data frame sent\nI0701 10:58:08.166685 100 log.go:172] (0xc000b558c0) Data frame received for 5\nI0701 10:58:08.166703 100 log.go:172] (0xc000500960) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 30314\nConnection to 172.17.0.15 30314 port [tcp/30314] succeeded!\nI0701 10:58:08.167659 100 log.go:172] (0xc000b558c0) Data frame received for 1\nI0701 10:58:08.167696 100 log.go:172] (0xc000a428c0) (1) Data frame handling\nI0701 10:58:08.167738 100 log.go:172] (0xc000a428c0) (1) Data frame sent\nI0701 10:58:08.167793 100 log.go:172] (0xc000b558c0) (0xc000a428c0) Stream removed, broadcasting: 1\nI0701 10:58:08.167831 100 log.go:172] (0xc000b558c0) Go away received\nI0701 10:58:08.168325 100 log.go:172] (0xc000b558c0) (0xc000a428c0) Stream removed, broadcasting: 1\nI0701 10:58:08.168349 100 log.go:172] (0xc000b558c0) (0xc0005e5540) Stream removed, broadcasting: 3\nI0701 10:58:08.168368 100 log.go:172] (0xc000b558c0) (0xc000500960) Stream removed, broadcasting: 5\n" Jul 1 10:58:08.175: INFO: stdout: "" Jul 1 10:58:08.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4703 execpod67zcq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30314' Jul 1 10:58:08.386: INFO: stderr: "I0701 10:58:08.298002 122 log.go:172] (0xc00003a0b0) (0xc00091c000) Create 
stream\nI0701 10:58:08.298065 122 log.go:172] (0xc00003a0b0) (0xc00091c000) Stream added, broadcasting: 1\nI0701 10:58:08.300360 122 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0701 10:58:08.300397 122 log.go:172] (0xc00003a0b0) (0xc00091c0a0) Create stream\nI0701 10:58:08.300410 122 log.go:172] (0xc00003a0b0) (0xc00091c0a0) Stream added, broadcasting: 3\nI0701 10:58:08.301583 122 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0701 10:58:08.301635 122 log.go:172] (0xc00003a0b0) (0xc0007fb4a0) Create stream\nI0701 10:58:08.301668 122 log.go:172] (0xc00003a0b0) (0xc0007fb4a0) Stream added, broadcasting: 5\nI0701 10:58:08.302721 122 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0701 10:58:08.379190 122 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0701 10:58:08.379225 122 log.go:172] (0xc00091c0a0) (3) Data frame handling\nI0701 10:58:08.379264 122 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0701 10:58:08.379275 122 log.go:172] (0xc0007fb4a0) (5) Data frame handling\nI0701 10:58:08.379292 122 log.go:172] (0xc0007fb4a0) (5) Data frame sent\nI0701 10:58:08.379303 122 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0701 10:58:08.379322 122 log.go:172] (0xc0007fb4a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 30314\nConnection to 172.17.0.18 30314 port [tcp/30314] succeeded!\nI0701 10:58:08.380623 122 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0701 10:58:08.380643 122 log.go:172] (0xc00091c000) (1) Data frame handling\nI0701 10:58:08.380652 122 log.go:172] (0xc00091c000) (1) Data frame sent\nI0701 10:58:08.380836 122 log.go:172] (0xc00003a0b0) (0xc00091c000) Stream removed, broadcasting: 1\nI0701 10:58:08.380879 122 log.go:172] (0xc00003a0b0) Go away received\nI0701 10:58:08.381488 122 log.go:172] (0xc00003a0b0) (0xc00091c000) Stream removed, broadcasting: 1\nI0701 10:58:08.381512 122 log.go:172] (0xc00003a0b0) (0xc00091c0a0) Stream removed, broadcasting: 3\nI0701 10:58:08.381523 122 
log.go:172] (0xc00003a0b0) (0xc0007fb4a0) Stream removed, broadcasting: 5\n" Jul 1 10:58:08.386: INFO: stdout: "" Jul 1 10:58:08.386: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:58:08.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4703" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.339 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":16,"skipped":234,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:58:08.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-259.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-259.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-259.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-259.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 10:58:16.553: INFO: DNS probes using dns-test-e9cebc65-f426-481b-b9b7-ffbbf4ef96d9 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-259.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-259.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-259.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-259.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 10:58:24.703: INFO: File wheezy_udp@dns-test-service-3.dns-259.svc.cluster.local from pod dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 10:58:24.707: INFO: File jessie_udp@dns-test-service-3.dns-259.svc.cluster.local from pod dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jul 1 10:58:24.707: INFO: Lookups using dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e failed for: [wheezy_udp@dns-test-service-3.dns-259.svc.cluster.local jessie_udp@dns-test-service-3.dns-259.svc.cluster.local] Jul 1 10:58:29.711: INFO: File wheezy_udp@dns-test-service-3.dns-259.svc.cluster.local from pod dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 10:58:29.714: INFO: File jessie_udp@dns-test-service-3.dns-259.svc.cluster.local from pod dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 10:58:29.714: INFO: Lookups using dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e failed for: [wheezy_udp@dns-test-service-3.dns-259.svc.cluster.local jessie_udp@dns-test-service-3.dns-259.svc.cluster.local] Jul 1 10:58:34.713: INFO: File wheezy_udp@dns-test-service-3.dns-259.svc.cluster.local from pod dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 10:58:34.717: INFO: File jessie_udp@dns-test-service-3.dns-259.svc.cluster.local from pod dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 10:58:34.717: INFO: Lookups using dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e failed for: [wheezy_udp@dns-test-service-3.dns-259.svc.cluster.local jessie_udp@dns-test-service-3.dns-259.svc.cluster.local] Jul 1 10:58:39.723: INFO: File wheezy_udp@dns-test-service-3.dns-259.svc.cluster.local from pod dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 10:58:39.727: INFO: File jessie_udp@dns-test-service-3.dns-259.svc.cluster.local from pod dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jul 1 10:58:39.727: INFO: Lookups using dns-259/dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e failed for: [wheezy_udp@dns-test-service-3.dns-259.svc.cluster.local jessie_udp@dns-test-service-3.dns-259.svc.cluster.local] Jul 1 10:58:44.725: INFO: DNS probes using dns-test-6e937ae8-2cde-47bc-b9fe-176321d0a54e succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-259.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-259.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-259.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-259.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 10:58:54.035: INFO: DNS probes using dns-test-65028929-374d-4cd7-8c58-c99059863ecb succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:58:54.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-259" for this suite. 
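The probes above drive a Service of type ExternalName: the cluster DNS serves a CNAME for the service name, the test patches `externalName` from foo.example.com to bar.example.com and waits for the CNAME answers to flip, then converts the service to type ClusterIP and expects an A record instead. A minimal manifest sketch of such a service (the object in the test is created programmatically; only the names here are taken from the log, the field layout is the standard Kubernetes Service API):

```yaml
# Sketch of an ExternalName Service like the one the DNS test exercises.
# Cluster DNS answers queries for dns-test-service-3.dns-259.svc.cluster.local
# with a CNAME pointing at spec.externalName.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-259
spec:
  type: ExternalName
  externalName: foo.example.com  # the test later patches this to bar.example.com,
                                 # then switches the service to type: ClusterIP
```

The ~15 s of "contains 'foo.example.com.' instead of 'bar.example.com.'" retries in the log are expected: DNS answers only change once the updated service object propagates to the cluster DNS and any cached records expire.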
• [SLOW TEST:45.717 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":17,"skipped":243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:58:54.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:58:54.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7266" for this suite. 
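The QOS Class test that just passed relies on the standard Kubernetes rule: a pod where every container has resource requests equal to its limits for both cpu and memory is assigned `status.qosClass: Guaranteed`. A minimal illustrative pod (the name and resource quantities here are assumptions, not values from the test):

```yaml
# Illustrative pod that Kubernetes classifies as QOS class "Guaranteed":
# requests == limits for cpu and memory on every container.
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example   # hypothetical name
spec:
  containers:
  - name: app
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
```

Setting requests lower than limits would instead yield `Burstable`, and omitting resources entirely yields `BestEffort`.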
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":18,"skipped":320,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:58:54.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating a pod Jul 1 10:58:54.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-9170 -- logs-generator --log-lines-total 100 --run-duration 20s' Jul 1 10:58:55.057: INFO: stderr: "" Jul 1 10:58:55.057: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. 
Jul 1 10:58:55.057: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jul 1 10:58:55.057: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9170" to be "running and ready, or succeeded" Jul 1 10:58:55.069: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.007257ms Jul 1 10:58:57.223: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166297634s Jul 1 10:58:59.227: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169919315s Jul 1 10:59:01.238: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.180729728s Jul 1 10:59:01.238: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jul 1 10:59:01.238: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Jul 1 10:59:01.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9170' Jul 1 10:59:01.381: INFO: stderr: "" Jul 1 10:59:01.381: INFO: stdout: "I0701 10:59:00.094108 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/w84 407\nI0701 10:59:00.314407 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/wqt 363\nI0701 10:59:00.494349 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/p8v 526\nI0701 10:59:00.694273 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/ddw 515\nI0701 10:59:00.894299 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/8gqq 251\nI0701 10:59:01.094217 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/pvmp 587\nI0701 10:59:01.294275 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/kbcf 541\n" STEP: limiting log lines Jul 1 10:59:01.382: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9170 --tail=1' Jul 1 10:59:01.494: INFO: stderr: "" Jul 1 10:59:01.494: INFO: stdout: "I0701 10:59:01.294275 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/kbcf 541\n" Jul 1 10:59:01.494: INFO: got output "I0701 10:59:01.294275 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/kbcf 541\n" STEP: limiting log bytes Jul 1 10:59:01.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9170 --limit-bytes=1' Jul 1 10:59:01.605: INFO: stderr: "" Jul 1 10:59:01.605: INFO: stdout: "I" Jul 1 10:59:01.605: INFO: got output "I" STEP: exposing timestamps Jul 1 10:59:01.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9170 --tail=1 --timestamps' Jul 1 10:59:01.715: INFO: stderr: "" Jul 1 10:59:01.715: INFO: stdout: "2020-07-01T10:59:01.49435914Z I0701 10:59:01.494231 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/qdvw 439\n" Jul 1 10:59:01.715: INFO: got output "2020-07-01T10:59:01.49435914Z I0701 10:59:01.494231 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/qdvw 439\n" STEP: restricting to a time range Jul 1 10:59:04.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9170 --since=1s' Jul 1 10:59:04.329: INFO: stderr: "" Jul 1 10:59:04.329: INFO: stdout: "I0701 10:59:03.494225 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/s4j6 394\nI0701 10:59:03.694288 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/947 453\nI0701 10:59:03.894329 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/f5mf 589\nI0701 
10:59:04.094396 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/2tq5 561\nI0701 10:59:04.294230 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/s8ll 480\n" Jul 1 10:59:04.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9170 --since=24h' Jul 1 10:59:04.449: INFO: stderr: "" Jul 1 10:59:04.449: INFO: stdout: "I0701 10:59:00.094108 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/w84 407\nI0701 10:59:00.314407 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/wqt 363\nI0701 10:59:00.494349 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/p8v 526\nI0701 10:59:00.694273 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/ddw 515\nI0701 10:59:00.894299 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/8gqq 251\nI0701 10:59:01.094217 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/pvmp 587\nI0701 10:59:01.294275 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/kbcf 541\nI0701 10:59:01.494231 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/qdvw 439\nI0701 10:59:01.694266 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/cs8 546\nI0701 10:59:01.894324 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/8gx 349\nI0701 10:59:02.094257 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/bwj8 424\nI0701 10:59:02.294256 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/g8w 362\nI0701 10:59:02.494271 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/6bm 338\nI0701 10:59:02.694320 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/9smt 588\nI0701 10:59:02.894261 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/klh 346\nI0701 10:59:03.094296 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/zh9v 280\nI0701 10:59:03.294280 1 
logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/86d 354\nI0701 10:59:03.494225 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/s4j6 394\nI0701 10:59:03.694288 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/947 453\nI0701 10:59:03.894329 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/f5mf 589\nI0701 10:59:04.094396 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/2tq5 561\nI0701 10:59:04.294230 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/s8ll 480\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Jul 1 10:59:04.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9170' Jul 1 10:59:13.712: INFO: stderr: "" Jul 1 10:59:13.712: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:59:13.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9170" for this suite. 
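The flags exercised above (`--tail`, `--limit-bytes`, `--since`, `--timestamps`) are plain stream filters over the container's log: last N lines, first N bytes, entries newer than a duration, and a timestamp prefix. A local sketch of the first two semantics using a throwaway file (no cluster needed; the file name and contents are illustrative assumptions):

```shell
# Mimic the selection semantics of `kubectl logs --tail=1` and
# `kubectl logs --limit-bytes=1` on a local file. kubectl applies the
# same idea to the container's log stream on the node.
printf 'I0701 line 0\nI0701 line 1\nI0701 line 2\n' > /tmp/gen.log
tail -n 1 /tmp/gen.log   # analogous to --tail=1: only the last line
head -c 1 /tmp/gen.log   # analogous to --limit-bytes=1: only the first byte
```

This is why the `--limit-bytes=1` step in the log returns just `"I"`: it is the first byte of the first glog-formatted line, cut mid-entry.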
• [SLOW TEST:18.920 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":19,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:59:13.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-sccv STEP: Creating a pod to test atomic-volume-subpath Jul 1 10:59:13.880: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sccv" in namespace "subpath-8926" to be "Succeeded or Failed" Jul 1 10:59:13.883: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.250764ms Jul 1 10:59:16.080: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199639372s Jul 1 10:59:18.084: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Running", Reason="", readiness=true. Elapsed: 4.204412487s Jul 1 10:59:20.089: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Running", Reason="", readiness=true. Elapsed: 6.209536531s Jul 1 10:59:22.094: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Running", Reason="", readiness=true. Elapsed: 8.214542184s Jul 1 10:59:24.099: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Running", Reason="", readiness=true. Elapsed: 10.219012205s Jul 1 10:59:26.103: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Running", Reason="", readiness=true. Elapsed: 12.223483942s Jul 1 10:59:28.108: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Running", Reason="", readiness=true. Elapsed: 14.228084085s Jul 1 10:59:30.112: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Running", Reason="", readiness=true. Elapsed: 16.232509044s Jul 1 10:59:32.117: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Running", Reason="", readiness=true. Elapsed: 18.237447598s Jul 1 10:59:34.122: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Running", Reason="", readiness=true. Elapsed: 20.242020575s Jul 1 10:59:36.126: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Running", Reason="", readiness=true. Elapsed: 22.246568835s Jul 1 10:59:38.131: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Running", Reason="", readiness=true. Elapsed: 24.25153998s Jul 1 10:59:40.136: INFO: Pod "pod-subpath-test-configmap-sccv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.256315247s STEP: Saw pod success Jul 1 10:59:40.136: INFO: Pod "pod-subpath-test-configmap-sccv" satisfied condition "Succeeded or Failed" Jul 1 10:59:40.139: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-sccv container test-container-subpath-configmap-sccv: STEP: delete the pod Jul 1 10:59:40.194: INFO: Waiting for pod pod-subpath-test-configmap-sccv to disappear Jul 1 10:59:40.196: INFO: Pod pod-subpath-test-configmap-sccv no longer exists STEP: Deleting pod pod-subpath-test-configmap-sccv Jul 1 10:59:40.196: INFO: Deleting pod "pod-subpath-test-configmap-sccv" in namespace "subpath-8926" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:59:40.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8926" for this suite. • [SLOW TEST:26.495 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":20,"skipped":359,"failed":0} [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:59:40.221: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 1 10:59:40.326: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 1 10:59:45.335: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 1 10:59:45.335: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Jul 1 10:59:45.438: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2027 /apis/apps/v1/namespaces/deployment-2027/deployments/test-cleanup-deployment dab6ec48-69d7-49ba-b043-e30f084a9f38 16783836 1 2020-07-01 10:59:45 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-07-01 10:59:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[decimal byte dump elided; it decodes to the managed-fields JSON {"f:metadata":{"f:labels":{...}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{...},"f:strategy":{...},"f:template":{...}}}],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025aab28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jul 1 10:59:45.548: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-2027 /apis/apps/v1/namespaces/deployment-2027/replicasets/test-cleanup-deployment-b4867b47f 1aba47f6-0670-44ac-adbf-d50ddcd121b0 16783843 1 2020-07-01 10:59:45 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment dab6ec48-69d7-49ba-b043-e30f084a9f38 0xc0025ab030 0xc0025ab031}] [] [{kube-controller-manager Update apps/v1 2020-07-01 10:59:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[(... FieldsV1 managed-fields byte dump elided ...)],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025ab0a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 1 10:59:45.549: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jul 1 10:59:45.549: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-2027 /apis/apps/v1/namespaces/deployment-2027/replicasets/test-cleanup-controller 9de8eb86-8471-424c-a158-f351a758ef75 16783837 1 2020-07-01 10:59:40 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment dab6ec48-69d7-49ba-b043-e30f084a9f38 0xc0025aaf1f 0xc0025aaf30}] [] [{e2e.test Update apps/v1 2020-07-01 10:59:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[(... FieldsV1 managed-fields byte dump elided ...)],}} {kube-controller-manager Update apps/v1 2020-07-01 10:59:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[(... FieldsV1 managed-fields byte dump elided ...)],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0025aafc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 1 10:59:45.596: INFO: Pod "test-cleanup-controller-q7vbr" is available: &Pod{ObjectMeta:{test-cleanup-controller-q7vbr test-cleanup-controller- deployment-2027 /api/v1/namespaces/deployment-2027/pods/test-cleanup-controller-q7vbr 72ed93e1-689d-4498-b811-8209fa761973 16783822 0 2020-07-01 10:59:40 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 9de8eb86-8471-424c-a158-f351a758ef75 0xc0025ab5a7 0xc0025ab5a8}] [] [{kube-controller-manager Update v1 2020-07-01 10:59:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[(... FieldsV1 managed-fields byte dump elided ...)],}} {kubelet Update v1 2020-07-01 10:59:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[(... FieldsV1 managed-fields byte dump elided ...)
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s985f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s985f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s985f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io
/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 10:59:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 10:59:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 10:59:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 10:59:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.91,StartTime:2020-07-01 10:59:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 10:59:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3e69f00644b06b5acee1cd35f0e558fb5effcf257170876242b4b18d54f287db,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} 
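The `FieldsV1{Raw:*[...]}` runs in the object dumps above are managed-fields JSON documents that Go's default formatter prints as decimal `[]byte` slices. A minimal sketch for recovering the readable JSON from such a dump (the helper name is my own, not part of the e2e framework):

```python
import json

def decode_fieldsv1(byte_values):
    """Decode a FieldsV1 Raw dump (a list of decimal byte values) into JSON."""
    return json.loads(bytes(byte_values).decode("utf-8"))

# These decimal bytes spell out the JSON document {"f:spec":{}}
raw = [123, 34, 102, 58, 115, 112, 101, 99, 34, 58, 123, 125, 125]
print(decode_fieldsv1(raw))  # {'f:spec': {}}
```

Run against the full dumps, this yields the per-manager field-ownership maps (`f:metadata`, `f:spec`, `f:status`, ...) that server-side apply records.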
Jul 1 10:59:45.596: INFO: Pod "test-cleanup-deployment-b4867b47f-h596q" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-h596q test-cleanup-deployment-b4867b47f- deployment-2027 /api/v1/namespaces/deployment-2027/pods/test-cleanup-deployment-b4867b47f-h596q 6dc706cb-957a-4057-9582-62c70c2b219f 16783842 0 2020-07-01 10:59:45 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 1aba47f6-0670-44ac-adbf-d50ddcd121b0 0xc0025ab780 0xc0025ab781}] [] [{kube-controller-manager Update v1 2020-07-01 10:59:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[(... FieldsV1 managed-fields byte dump elided ...)],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s985f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s985f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s985f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lif
ecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 10:59:45 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:59:45.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2027" for this suite. • [SLOW TEST:5.410 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":21,"skipped":359,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:59:45.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 1 10:59:45.728: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" 
that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jul 1 10:59:46.819: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 10:59:47.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6067" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":22,"skipped":377,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 10:59:47.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jul 1 10:59:48.705: INFO: >>> kubeConfig: /root/.kube/config Jul 1 10:59:51.760: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:00:02.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9689" for this suite. • [SLOW TEST:14.663 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":23,"skipped":381,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:00:02.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
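Lines like "Waiting up to 3m0s for all (but 0) nodes to be ready" and "Waiting for the namespace to be removed" reflect the framework's poll-until-timeout pattern. A minimal sketch of that pattern (names are mine; `namespace_gone` in the usage comment is hypothetical):

```python
import time

def wait_for(condition, timeout_s, interval_s=0.5):
    """Poll condition() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval_s)
    return False

# e.g. wait_for(lambda: namespace_gone("nsdeletetest-948"), timeout_s=180)
```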
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:00:09.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3643" for this suite. STEP: Destroying namespace "nsdeletetest-948" for this suite. Jul 1 11:00:09.053: INFO: Namespace nsdeletetest-948 was already deleted STEP: Destroying namespace "nsdeletetest-3538" for this suite. • [SLOW TEST:6.483 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":24,"skipped":392,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:00:09.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps 
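The timings these logs report ("Waiting up to 5m0s", "took: 77.881075ms") are Go duration strings. A small sketch (helper name mine) for converting them to seconds:

```python
import re

_UNIT_SECONDS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def go_duration_to_seconds(text):
    """Convert a Go duration string such as '3m0s' or '77.881075ms' to seconds."""
    # 'm(?!s)' keeps minutes from swallowing the 'm' of 'ms'
    parts = re.findall(r"(\d+(?:\.\d+)?)(h|m(?!s)|s|ms|us|ns)", text)
    if not parts:
        raise ValueError(f"not a Go duration: {text!r}")
    return sum(float(value) * _UNIT_SECONDS[unit] for value, unit in parts)

print(go_duration_to_seconds("3m0s"))  # 180.0
```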
STEP: Creating RC which spawns configmap-volume pods Jul 1 11:00:09.939: INFO: Pod name wrapped-volume-race-80af8e5b-ddab-4be2-b451-3ddf8465658f: Found 0 pods out of 5 Jul 1 11:00:14.957: INFO: Pod name wrapped-volume-race-80af8e5b-ddab-4be2-b451-3ddf8465658f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-80af8e5b-ddab-4be2-b451-3ddf8465658f in namespace emptydir-wrapper-471, will wait for the garbage collector to delete the pods Jul 1 11:00:27.562: INFO: Deleting ReplicationController wrapped-volume-race-80af8e5b-ddab-4be2-b451-3ddf8465658f took: 77.881075ms Jul 1 11:00:28.262: INFO: Terminating ReplicationController wrapped-volume-race-80af8e5b-ddab-4be2-b451-3ddf8465658f pods took: 700.346787ms STEP: Creating RC which spawns configmap-volume pods Jul 1 11:00:43.631: INFO: Pod name wrapped-volume-race-e9cf2566-9d02-4cee-9c1f-72526dbe25d7: Found 0 pods out of 5 Jul 1 11:00:48.641: INFO: Pod name wrapped-volume-race-e9cf2566-9d02-4cee-9c1f-72526dbe25d7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e9cf2566-9d02-4cee-9c1f-72526dbe25d7 in namespace emptydir-wrapper-471, will wait for the garbage collector to delete the pods Jul 1 11:01:02.783: INFO: Deleting ReplicationController wrapped-volume-race-e9cf2566-9d02-4cee-9c1f-72526dbe25d7 took: 7.135164ms Jul 1 11:01:03.084: INFO: Terminating ReplicationController wrapped-volume-race-e9cf2566-9d02-4cee-9c1f-72526dbe25d7 pods took: 300.240822ms STEP: Creating RC which spawns configmap-volume pods Jul 1 11:01:14.237: INFO: Pod name wrapped-volume-race-05349646-23a3-4f8d-b497-b07b930f223c: Found 0 pods out of 5 Jul 1 11:01:19.244: INFO: Pod name wrapped-volume-race-05349646-23a3-4f8d-b497-b07b930f223c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-05349646-23a3-4f8d-b497-b07b930f223c in namespace emptydir-wrapper-471, 
will wait for the garbage collector to delete the pods Jul 1 11:01:35.369: INFO: Deleting ReplicationController wrapped-volume-race-05349646-23a3-4f8d-b497-b07b930f223c took: 20.296317ms Jul 1 11:01:37.469: INFO: Terminating ReplicationController wrapped-volume-race-05349646-23a3-4f8d-b497-b07b930f223c pods took: 2.100370836s STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:01:54.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-471" for this suite. • [SLOW TEST:105.757 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":25,"skipped":393,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:01:54.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jul 1 11:01:54.957: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6289 /api/v1/namespaces/watch-6289/configmaps/e2e-watch-test-resource-version d0e1dc3b-a3be-4f32-a96d-6ae649ed7958 16785023 0 2020-07-01 11:01:54 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-07-01 11:01:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 11:01:54.957: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6289 /api/v1/namespaces/watch-6289/configmaps/e2e-watch-test-resource-version d0e1dc3b-a3be-4f32-a96d-6ae649ed7958 16785024 0 2020-07-01 11:01:54 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-07-01 11:01:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 
125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:01:54.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6289" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":26,"skipped":405,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:01:54.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Jul 1 11:01:55.056: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 11:01:55.068: INFO: Waiting for terminating namespaces to be deleted... 
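The &ConfigMap dumps above print each managedFields entry's Raw field as a decimal byte array (Go's default formatting for a []byte). Decoding those bytes recovers the FieldsV1 JSON. For example, using the byte array from the MODIFIED event above:

```python
# Decimal byte array copied verbatim from the "Raw:*[...]" field in the
# watch test's MODIFIED event above; decoding it yields the FieldsV1 JSON.
raw = [123, 34, 102, 58, 100, 97, 116, 97, 34, 58, 123, 34, 46, 34, 58,
       123, 125, 44, 34, 102, 58, 109, 117, 116, 97, 116, 105, 111, 110,
       34, 58, 123, 125, 125, 44, 34, 102, 58, 109, 101, 116, 97, 100,
       97, 116, 97, 34, 58, 123, 34, 102, 58, 108, 97, 98, 101, 108, 115,
       34, 58, 123, 34, 46, 34, 58, 123, 125, 44, 34, 102, 58, 119, 97,
       116, 99, 104, 45, 116, 104, 105, 115, 45, 99, 111, 110, 102, 105,
       103, 109, 97, 112, 34, 58, 123, 125, 125, 125, 125]

fields_v1 = bytes(raw).decode("utf-8")
print(fields_v1)
# → {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}
```

So the opaque dumps are just the server-side-apply field ownership record: e2e.test owns the `mutation` data key and the `watch-this-configmap` label.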
Jul 1 11:01:55.071: INFO: Logging pods the kubelet thinks is on node kali-worker before test Jul 1 11:01:55.090: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) Jul 1 11:01:55.090: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 11:01:55.090: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) Jul 1 11:01:55.090: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 11:01:55.090: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Jul 1 11:01:55.110: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) Jul 1 11:01:55.110: INFO: Container kindnet-cni ready: true, restart count 5 Jul 1 11:01:55.110: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) Jul 1 11:01:55.110: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
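The scheduling rule this spec validates, that a hostPort bound on the wildcard hostIP 0.0.0.0 conflicts with the same hostPort/protocol on any other hostIP, can be modeled as a small predicate. This is an illustrative sketch, not the kube-scheduler's actual code:

```python
def host_ports_conflict(a, b):
    """a, b: (hostIP, hostPort, protocol) tuples claimed by two pods.

    Models the rule exercised by this test: two host ports conflict when
    port and protocol match and either side binds the wildcard 0.0.0.0,
    or both bind the same hostIP.
    """
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == "0.0.0.0" or ip_b == "0.0.0.0" or ip_a == ip_b

# In the test below: pod4 binds 54322 on 0.0.0.0 and pod5 tries 54322 on
# 127.0.0.1 on the same node, so pod5 must fail to schedule there.
```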
STEP: verifying the node has the label kubernetes.io/e2e-020e10c1-3463-4870-a614-e9074927fb6c STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-020e10c1-3463-4870-a614-e9074927fb6c off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-020e10c1-3463-4870-a614-e9074927fb6c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:07:05.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5440" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:310.522 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":27,"skipped":406,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:07:05.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 11:07:06.460: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 11:07:08.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198426, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198426, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198426, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198426, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 11:07:11.502: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom 
resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 1 11:07:11.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4284-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:07:12.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8963" for this suite. STEP: Destroying namespace "webhook-8963-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.353 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":28,"skipped":427,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:07:12.841: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jul 1 11:07:12.935: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9718 /api/v1/namespaces/watch-9718/configmaps/e2e-watch-test-label-changed 3d20b77f-95ae-43d0-9672-5793b83b1885 16786234 0 2020-07-01 11:07:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-01 11:07:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 11:07:12.935: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9718 /api/v1/namespaces/watch-9718/configmaps/e2e-watch-test-label-changed 3d20b77f-95ae-43d0-9672-5793b83b1885 16786235 0 2020-07-01 11:07:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-01 11:07:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 
125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 11:07:12.936: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9718 /api/v1/namespaces/watch-9718/configmaps/e2e-watch-test-label-changed 3d20b77f-95ae-43d0-9672-5793b83b1885 16786236 0 2020-07-01 11:07:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-01 11:07:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jul 1 11:07:23.012: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9718 /api/v1/namespaces/watch-9718/configmaps/e2e-watch-test-label-changed 3d20b77f-95ae-43d0-9672-5793b83b1885 16786286 0 2020-07-01 11:07:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-01 11:07:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 
102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 11:07:23.012: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9718 /api/v1/namespaces/watch-9718/configmaps/e2e-watch-test-label-changed 3d20b77f-95ae-43d0-9672-5793b83b1885 16786287 0 2020-07-01 11:07:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-01 11:07:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 11:07:23.012: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9718 /api/v1/namespaces/watch-9718/configmaps/e2e-watch-test-label-changed 3d20b77f-95ae-43d0-9672-5793b83b1885 16786288 0 2020-07-01 11:07:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-01 11:07:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:07:23.012: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "watch-9718" for this suite. • [SLOW TEST:10.250 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":29,"skipped":445,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:07:23.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 1 11:07:23.155: INFO: Waiting up to 5m0s for pod "pod-4374a5d2-34ba-4ab9-ad97-40eb012d5bef" in namespace "emptydir-5399" to be "Succeeded or Failed" Jul 1 11:07:23.159: INFO: Pod "pod-4374a5d2-34ba-4ab9-ad97-40eb012d5bef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.971708ms Jul 1 11:07:25.175: INFO: Pod "pod-4374a5d2-34ba-4ab9-ad97-40eb012d5bef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020275995s Jul 1 11:07:27.179: INFO: Pod "pod-4374a5d2-34ba-4ab9-ad97-40eb012d5bef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02452183s STEP: Saw pod success Jul 1 11:07:27.180: INFO: Pod "pod-4374a5d2-34ba-4ab9-ad97-40eb012d5bef" satisfied condition "Succeeded or Failed" Jul 1 11:07:27.183: INFO: Trying to get logs from node kali-worker pod pod-4374a5d2-34ba-4ab9-ad97-40eb012d5bef container test-container: STEP: delete the pod Jul 1 11:07:27.215: INFO: Waiting for pod pod-4374a5d2-34ba-4ab9-ad97-40eb012d5bef to disappear Jul 1 11:07:27.249: INFO: Pod pod-4374a5d2-34ba-4ab9-ad97-40eb012d5bef no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:07:27.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5399" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:07:27.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
projected-configmap-test-volume-5ab1bb64-ff8e-487d-b2f8-99723e21c310 STEP: Creating a pod to test consume configMaps Jul 1 11:07:27.530: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bc1b1677-0ef7-490c-bdee-103c28deae49" in namespace "projected-9450" to be "Succeeded or Failed" Jul 1 11:07:27.536: INFO: Pod "pod-projected-configmaps-bc1b1677-0ef7-490c-bdee-103c28deae49": Phase="Pending", Reason="", readiness=false. Elapsed: 5.911565ms Jul 1 11:07:29.630: INFO: Pod "pod-projected-configmaps-bc1b1677-0ef7-490c-bdee-103c28deae49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099636953s Jul 1 11:07:31.634: INFO: Pod "pod-projected-configmaps-bc1b1677-0ef7-490c-bdee-103c28deae49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103807747s STEP: Saw pod success Jul 1 11:07:31.634: INFO: Pod "pod-projected-configmaps-bc1b1677-0ef7-490c-bdee-103c28deae49" satisfied condition "Succeeded or Failed" Jul 1 11:07:31.638: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-bc1b1677-0ef7-490c-bdee-103c28deae49 container projected-configmap-volume-test: STEP: delete the pod Jul 1 11:07:31.768: INFO: Waiting for pod pod-projected-configmaps-bc1b1677-0ef7-490c-bdee-103c28deae49 to disappear Jul 1 11:07:31.947: INFO: Pod pod-projected-configmaps-bc1b1677-0ef7-490c-bdee-103c28deae49 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:07:31.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9450" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:07:32.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 1 11:07:40.314: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 11:07:40.380: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 11:07:42.380: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 11:07:42.625: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 11:07:44.380: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 11:07:44.385: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 11:07:46.380: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 11:07:46.386: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 11:07:48.380: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 11:07:48.385: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 11:07:50.380: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 11:07:50.383: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 11:07:52.380: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 11:07:52.385: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 11:07:54.380: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 11:07:54.384: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:07:54.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7957" for this suite. 
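The repeated "Waiting for pod pod-with-poststart-exec-hook to disappear" lines above come from a fixed-interval poll (every 2 seconds here) that retries until the pod is gone or a timeout expires. A generic sketch of that wait loop, with a stubbed existence check standing in for a real "GET pod" API call:

```python
import time

def wait_for_disappear(still_exists, timeout=30.0, interval=2.0,
                       sleep=time.sleep):
    """Poll still_exists() every `interval` seconds until it returns False.

    Returns True if the object disappeared within `timeout`, else False.
    `still_exists` is any zero-argument callable; in the real test it
    would query the API server for the pod.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not still_exists():
            return True
        sleep(interval)
    return False

# Simulate a pod that disappears on the fourth check, as in the log above
# where the pod "still exists" several times before it is gone.
checks = iter([True, True, True, False])
assert wait_for_disappear(lambda: next(checks), sleep=lambda _: None)
```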
• [SLOW TEST:22.364 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":504,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:07:54.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 11:07:54.984: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 11:07:57.012: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198474, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198474, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198475, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198474, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 11:08:00.572: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:08:00.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4089" for this suite. STEP: Destroying namespace "webhook-4089-markers" for this suite. 
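The mutating webhook deployed above answers each AdmissionReview with a response whose mutation is a base64-encoded JSONPatch. A minimal sketch of building such a response in the admission.k8s.io/v1 shape; the patch content here (adding a label) is illustrative, not the actual mutation the e2e webhook applies:

```python
import base64
import json

def mutating_response(uid, patch_ops):
    """Build an admission/v1 AdmissionReview response applying patch_ops."""
    patch = base64.b64encode(json.dumps(patch_ops).encode()).decode()
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,            # must echo the request's uid
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": patch,        # base64-encoded JSONPatch document
        },
    }

# Hypothetical example mutation: add a label to the incoming pod.
resp = mutating_response("example-uid", [
    {"op": "add", "path": "/metadata/labels/mutated", "value": "true"},
])
```

The API server decodes `patch` and applies it to the object before admission continues, which is why the test can then assert that defaults were applied on top of the mutation.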
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.056 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":33,"skipped":505,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:08:02.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-ef34cfe3-c64f-4e1a-8add-c604f2a24fbc STEP: Creating a pod to test consume secrets Jul 1 11:08:03.449: INFO: Waiting up to 5m0s for pod "pod-secrets-0954db0c-09a8-4ade-a264-5cff5bb8f36f" in namespace "secrets-2326" to be "Succeeded or Failed" Jul 1 11:08:03.601: INFO: Pod "pod-secrets-0954db0c-09a8-4ade-a264-5cff5bb8f36f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 151.683339ms Jul 1 11:08:05.605: INFO: Pod "pod-secrets-0954db0c-09a8-4ade-a264-5cff5bb8f36f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155635452s Jul 1 11:08:07.607: INFO: Pod "pod-secrets-0954db0c-09a8-4ade-a264-5cff5bb8f36f": Phase="Running", Reason="", readiness=true. Elapsed: 4.158177703s Jul 1 11:08:09.611: INFO: Pod "pod-secrets-0954db0c-09a8-4ade-a264-5cff5bb8f36f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.161906197s STEP: Saw pod success Jul 1 11:08:09.611: INFO: Pod "pod-secrets-0954db0c-09a8-4ade-a264-5cff5bb8f36f" satisfied condition "Succeeded or Failed" Jul 1 11:08:09.614: INFO: Trying to get logs from node kali-worker pod pod-secrets-0954db0c-09a8-4ade-a264-5cff5bb8f36f container secret-env-test: STEP: delete the pod Jul 1 11:08:09.648: INFO: Waiting for pod pod-secrets-0954db0c-09a8-4ade-a264-5cff5bb8f36f to disappear Jul 1 11:08:09.664: INFO: Pod pod-secrets-0954db0c-09a8-4ade-a264-5cff5bb8f36f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:08:09.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2326" for this suite. 
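The Pending → Running → Succeeded sequence above is produced by a polling wait with roughly 2 s between checks ("Waiting up to 5m0s for pod … to be 'Succeeded or Failed'"). A minimal sketch of that pattern follows; `wait_for_pod` is a hypothetical helper, not the framework's actual code:

```python
import time

def wait_for_pod(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal phase is seen, mimicking the
    'Succeeded or Failed' wait in the log (illustrative sketch only)."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated phases matching the log's Pending -> Running -> Succeeded run.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_pod(lambda: next(phases), interval=0.01))  # -> Succeeded
```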
• [SLOW TEST:7.241 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":520,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:08:09.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 1 11:08:09.748: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:08:13.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8178" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":535,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:08:13.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:08:18.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3527" for this suite. 
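After each spec the runner emits a one-line JSON progress record like the `{"msg":"PASSED …","total":275,…}` entries interleaved through this log. They are plain JSON, so tallying a run is straightforward; a small sketch using one line copied from the log:

```python
import json

# A per-spec progress line from this run (copied verbatim from the log).
line = ('{"msg":"PASSED [k8s.io] Pods should support remote command execution '
        'over websockets [NodeConformance] [Conformance]",'
        '"total":275,"completed":35,"skipped":535,"failed":0}')

status = json.loads(line)
remaining = status["total"] - status["completed"]
print(status["failed"], remaining)  # -> 0 240
```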
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":550,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:08:18.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2383 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2383 STEP: creating replication controller externalsvc in namespace services-2383 I0701 11:08:18.385814 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2383, replica count: 2 I0701 11:08:21.436412 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 11:08:24.436734 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName 
Jul 1 11:08:24.471: INFO: Creating new exec pod Jul 1 11:08:28.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2383 execpod78wc8 -- /bin/sh -x -c nslookup clusterip-service' Jul 1 11:08:31.760: INFO: stderr: "I0701 11:08:31.624451 312 log.go:172] (0xc00003ac60) (0xc000697720) Create stream\nI0701 11:08:31.624504 312 log.go:172] (0xc00003ac60) (0xc000697720) Stream added, broadcasting: 1\nI0701 11:08:31.627228 312 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0701 11:08:31.627265 312 log.go:172] (0xc00003ac60) (0xc00061d5e0) Create stream\nI0701 11:08:31.627284 312 log.go:172] (0xc00003ac60) (0xc00061d5e0) Stream added, broadcasting: 3\nI0701 11:08:31.628384 312 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0701 11:08:31.628426 312 log.go:172] (0xc00003ac60) (0xc000538a00) Create stream\nI0701 11:08:31.628442 312 log.go:172] (0xc00003ac60) (0xc000538a00) Stream added, broadcasting: 5\nI0701 11:08:31.629750 312 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0701 11:08:31.713873 312 log.go:172] (0xc00003ac60) Data frame received for 5\nI0701 11:08:31.713895 312 log.go:172] (0xc000538a00) (5) Data frame handling\nI0701 11:08:31.713906 312 log.go:172] (0xc000538a00) (5) Data frame sent\n+ nslookup clusterip-service\nI0701 11:08:31.750586 312 log.go:172] (0xc00003ac60) Data frame received for 3\nI0701 11:08:31.750623 312 log.go:172] (0xc00061d5e0) (3) Data frame handling\nI0701 11:08:31.750655 312 log.go:172] (0xc00061d5e0) (3) Data frame sent\nI0701 11:08:31.751817 312 log.go:172] (0xc00003ac60) Data frame received for 3\nI0701 11:08:31.751841 312 log.go:172] (0xc00061d5e0) (3) Data frame handling\nI0701 11:08:31.751860 312 log.go:172] (0xc00061d5e0) (3) Data frame sent\nI0701 11:08:31.752695 312 log.go:172] (0xc00003ac60) Data frame received for 5\nI0701 11:08:31.752742 312 log.go:172] (0xc000538a00) (5) Data frame handling\nI0701 11:08:31.752865 312 
log.go:172] (0xc00003ac60) Data frame received for 3\nI0701 11:08:31.752890 312 log.go:172] (0xc00061d5e0) (3) Data frame handling\nI0701 11:08:31.755419 312 log.go:172] (0xc00003ac60) Data frame received for 1\nI0701 11:08:31.755459 312 log.go:172] (0xc000697720) (1) Data frame handling\nI0701 11:08:31.755485 312 log.go:172] (0xc000697720) (1) Data frame sent\nI0701 11:08:31.755509 312 log.go:172] (0xc00003ac60) (0xc000697720) Stream removed, broadcasting: 1\nI0701 11:08:31.755971 312 log.go:172] (0xc00003ac60) (0xc000697720) Stream removed, broadcasting: 1\nI0701 11:08:31.755994 312 log.go:172] (0xc00003ac60) (0xc00061d5e0) Stream removed, broadcasting: 3\nI0701 11:08:31.756005 312 log.go:172] (0xc00003ac60) (0xc000538a00) Stream removed, broadcasting: 5\n" Jul 1 11:08:31.761: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2383.svc.cluster.local\tcanonical name = externalsvc.services-2383.svc.cluster.local.\nName:\texternalsvc.services-2383.svc.cluster.local\nAddress: 10.96.150.115\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2383, will wait for the garbage collector to delete the pods Jul 1 11:08:31.850: INFO: Deleting ReplicationController externalsvc took: 6.853265ms Jul 1 11:08:32.250: INFO: Terminating ReplicationController externalsvc pods took: 400.272086ms Jul 1 11:08:43.835: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:08:43.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2383" for this suite. 
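The test passes because the nslookup stdout shows the converted ClusterIP service resolving as a CNAME to the ExternalName target. A minimal check over that captured stdout (the extraction logic here is an illustrative assumption, not the framework's own assertion):

```python
# nslookup stdout captured by the exec pod (taken from the log above).
stdout = (
    "Server:\t\t10.96.0.10\n"
    "Address:\t10.96.0.10#53\n\n"
    "clusterip-service.services-2383.svc.cluster.local\t"
    "canonical name = externalsvc.services-2383.svc.cluster.local.\n"
    "Name:\texternalsvc.services-2383.svc.cluster.local\n"
    "Address: 10.96.150.115\n"
)

# Pull the CNAME target out of the "canonical name = ..." line.
target = next(line.split("canonical name = ")[1].rstrip(".")
              for line in stdout.splitlines() if "canonical name" in line)
print(target)  # -> externalsvc.services-2383.svc.cluster.local
```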
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:25.836 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":37,"skipped":568,"failed":0} [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:08:43.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-405b08e7-d7b8-4dcf-a4fe-9b66ae9b046c STEP: Creating configMap with name cm-test-opt-upd-843b522d-797c-41dc-9395-44c21909530a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-405b08e7-d7b8-4dcf-a4fe-9b66ae9b046c STEP: Updating configmap cm-test-opt-upd-843b522d-797c-41dc-9395-44c21909530a STEP: Creating configMap with name cm-test-opt-create-361e018c-6117-4a78-bdce-28c943e53289 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:08:54.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1280" for this suite. • [SLOW TEST:10.267 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":568,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:08:54.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jul 1 11:09:02.965: INFO: 10 pods remaining Jul 1 11:09:02.965: INFO: 0 pods has nil DeletionTimestamp Jul 1 11:09:02.965: INFO: Jul 1 11:09:04.561: INFO: 0 pods remaining Jul 1 11:09:04.561: INFO: 0 pods has nil DeletionTimestamp Jul 1 11:09:04.561: INFO: Jul 1 11:09:05.952: INFO: 0 pods remaining Jul 1 
11:09:05.952: INFO: 0 pods has nil DeletionTimestamp Jul 1 11:09:05.952: INFO: STEP: Gathering metrics W0701 11:09:06.393645 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 11:09:06.393: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:09:06.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4094" for this suite. 
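The "10 pods remaining … 0 pods remaining" countdown above is foreground cascading deletion in action: the RC is marked for deletion immediately but only removed once every dependent pod is gone. A toy model of that ordering (an illustrative sketch, not Kubernetes garbage-collector code):

```python
def foreground_delete(rc, pods):
    """Model foreground cascade: mark the owner first, remove it last."""
    rc["deletionTimestamp"] = "now"   # owner marked, but kept around
    history = []
    while pods:
        pods.pop()                    # dependents are deleted first
        history.append(len(pods))     # mirrors the "N pods remaining" lines
    return history, "deleted"         # owner removed only after dependents

pods = [f"pod-{i}" for i in range(3)]
history, final = foreground_delete({"name": "rc"}, pods)
print(history, final)  # -> [2, 1, 0] deleted
```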
• [SLOW TEST:12.276 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":39,"skipped":571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:09:06.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Jul 1 11:09:06.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2309' Jul 1 11:09:08.522: INFO: stderr: "" Jul 1 11:09:08.522: INFO: stdout: "pod/pause created\n" Jul 1 11:09:08.522: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jul 1 11:09:08.522: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2309" to be "running and ready" Jul 1 11:09:08.599: INFO: Pod "pause": 
Phase="Pending", Reason="", readiness=false. Elapsed: 77.300865ms Jul 1 11:09:10.727: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205335248s Jul 1 11:09:12.776: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253560928s Jul 1 11:09:14.780: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.25766374s Jul 1 11:09:14.780: INFO: Pod "pause" satisfied condition "running and ready" Jul 1 11:09:14.780: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Jul 1 11:09:14.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2309' Jul 1 11:09:14.886: INFO: stderr: "" Jul 1 11:09:14.886: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jul 1 11:09:14.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2309' Jul 1 11:09:14.986: INFO: stderr: "" Jul 1 11:09:14.986: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Jul 1 11:09:14.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2309' Jul 1 11:09:15.118: INFO: stderr: "" Jul 1 11:09:15.118: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jul 1 11:09:15.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 
--kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2309' Jul 1 11:09:15.257: INFO: stderr: "" Jul 1 11:09:15.257: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Jul 1 11:09:15.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2309' Jul 1 11:09:15.401: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 11:09:15.401: INFO: stdout: "pod \"pause\" force deleted\n" Jul 1 11:09:15.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2309' Jul 1 11:09:15.624: INFO: stderr: "No resources found in kubectl-2309 namespace.\n" Jul 1 11:09:15.624: INFO: stdout: "" Jul 1 11:09:15.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2309 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 11:09:15.721: INFO: stderr: "" Jul 1 11:09:15.721: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:09:15.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2309" for this suite. 
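The final cleanup check above uses a go-template to list only pods that do not yet carry a `deletionTimestamp`, so a force-deleted pod produces empty output. A Python equivalent of that filter (the second pod entry is invented here purely for illustration):

```python
# Equivalent of the log's go-template: emit names of pods NOT already
# marked for deletion. Input shape mirrors a `kubectl get pods -o json` list.
pods = {"items": [
    {"metadata": {"name": "pause",
                  "deletionTimestamp": "2020-07-01T11:09:15Z"}},
    {"metadata": {"name": "other"}},          # hypothetical surviving pod
]}

surviving = [p["metadata"]["name"] for p in pods["items"]
             if "deletionTimestamp" not in p["metadata"]]
print(surviving)  # -> ['other']
```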
• [SLOW TEST:9.322 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":40,"skipped":598,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:09:15.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-555f723d-1527-4b77-8502-2befaae46496 STEP: Creating a pod to test consume secrets Jul 1 11:09:15.936: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-513b170e-8682-4083-a564-3c99152e76b1" in namespace "projected-7578" to be "Succeeded or Failed" Jul 1 11:09:15.972: INFO: Pod "pod-projected-secrets-513b170e-8682-4083-a564-3c99152e76b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.868796ms Jul 1 11:09:17.977: INFO: Pod "pod-projected-secrets-513b170e-8682-4083-a564-3c99152e76b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04033079s Jul 1 11:09:19.981: INFO: Pod "pod-projected-secrets-513b170e-8682-4083-a564-3c99152e76b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044937652s STEP: Saw pod success Jul 1 11:09:19.982: INFO: Pod "pod-projected-secrets-513b170e-8682-4083-a564-3c99152e76b1" satisfied condition "Succeeded or Failed" Jul 1 11:09:19.985: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-513b170e-8682-4083-a564-3c99152e76b1 container secret-volume-test: STEP: delete the pod Jul 1 11:09:20.142: INFO: Waiting for pod pod-projected-secrets-513b170e-8682-4083-a564-3c99152e76b1 to disappear Jul 1 11:09:20.149: INFO: Pod pod-projected-secrets-513b170e-8682-4083-a564-3c99152e76b1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:09:20.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7578" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:09:20.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Jul 1 11:09:20.505: INFO: Waiting up to 5m0s for pod "client-containers-9997d6f7-8625-4460-859f-445717786756" in namespace "containers-4118" to be "Succeeded or Failed" Jul 1 11:09:20.516: INFO: Pod "client-containers-9997d6f7-8625-4460-859f-445717786756": Phase="Pending", Reason="", readiness=false. Elapsed: 10.435029ms Jul 1 11:09:22.520: INFO: Pod "client-containers-9997d6f7-8625-4460-859f-445717786756": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01500134s Jul 1 11:09:24.817: INFO: Pod "client-containers-9997d6f7-8625-4460-859f-445717786756": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311963133s Jul 1 11:09:26.822: INFO: Pod "client-containers-9997d6f7-8625-4460-859f-445717786756": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.316398488s STEP: Saw pod success Jul 1 11:09:26.822: INFO: Pod "client-containers-9997d6f7-8625-4460-859f-445717786756" satisfied condition "Succeeded or Failed" Jul 1 11:09:26.825: INFO: Trying to get logs from node kali-worker pod client-containers-9997d6f7-8625-4460-859f-445717786756 container test-container: STEP: delete the pod Jul 1 11:09:26.915: INFO: Waiting for pod client-containers-9997d6f7-8625-4460-859f-445717786756 to disappear Jul 1 11:09:26.920: INFO: Pod client-containers-9997d6f7-8625-4460-859f-445717786756 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:09:26.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4118" for this suite. • [SLOW TEST:6.698 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":641,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:09:26.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to 
be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:09:32.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8634" for this suite. • [SLOW TEST:5.183 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":43,"skipped":651,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:09:32.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 
configMap with name configmap-test-volume-7370878d-d32b-4d2c-a113-098c2617ecfe STEP: Creating a pod to test consume configMaps Jul 1 11:09:32.224: INFO: Waiting up to 5m0s for pod "pod-configmaps-84267b87-6509-4c7a-ad3c-cbd750f88b12" in namespace "configmap-9851" to be "Succeeded or Failed" Jul 1 11:09:32.317: INFO: Pod "pod-configmaps-84267b87-6509-4c7a-ad3c-cbd750f88b12": Phase="Pending", Reason="", readiness=false. Elapsed: 92.643826ms Jul 1 11:09:34.321: INFO: Pod "pod-configmaps-84267b87-6509-4c7a-ad3c-cbd750f88b12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097298406s Jul 1 11:09:36.326: INFO: Pod "pod-configmaps-84267b87-6509-4c7a-ad3c-cbd750f88b12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102382296s STEP: Saw pod success Jul 1 11:09:36.326: INFO: Pod "pod-configmaps-84267b87-6509-4c7a-ad3c-cbd750f88b12" satisfied condition "Succeeded or Failed" Jul 1 11:09:36.329: INFO: Trying to get logs from node kali-worker pod pod-configmaps-84267b87-6509-4c7a-ad3c-cbd750f88b12 container configmap-volume-test: STEP: delete the pod Jul 1 11:09:36.378: INFO: Waiting for pod pod-configmaps-84267b87-6509-4c7a-ad3c-cbd750f88b12 to disappear Jul 1 11:09:36.402: INFO: Pod pod-configmaps-84267b87-6509-4c7a-ad3c-cbd750f88b12 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:09:36.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9851" for this suite. 
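Timeouts and elapsed times throughout this log use Go duration notation ("5m0s", "92.643826ms", "4.102382296s"). A small sketch for converting such strings to seconds (a simplified parser, assuming only ms/s/m/h components):

```python
import re

_UNITS = {"ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def go_duration_seconds(text):
    """Sum the components of a Go-style duration string, e.g. '5m0s'."""
    total = 0.0
    for m in re.finditer(r"([\d.]+)(ms|s|m|h)", text):
        total += float(m.group(1)) * _UNITS[m.group(2)]
    return total

print(go_duration_seconds("5m0s"))          # -> 300.0
print(go_duration_seconds("92.643826ms"))   # -> 0.092643826
```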
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":662,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:09:36.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:09:40.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5271" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":668,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:09:40.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-1049 STEP: creating replication controller nodeport-test in namespace services-1049 I0701 11:09:40.773756 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1049, replica count: 2 I0701 11:09:43.824246 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 11:09:46.824516 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 11:09:49.824774 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 11:09:49.824: INFO: Creating new exec pod Jul 1 11:09:54.997: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1049 execpoddnn8b -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jul 1 11:09:55.226: INFO: stderr: "I0701 11:09:55.131147 500 log.go:172] (0xc000a8a8f0) (0xc000613540) Create stream\nI0701 11:09:55.131209 500 log.go:172] (0xc000a8a8f0) (0xc000613540) Stream added, broadcasting: 1\nI0701 11:09:55.133675 500 log.go:172] (0xc000a8a8f0) Reply frame received for 1\nI0701 11:09:55.133714 500 log.go:172] (0xc000a8a8f0) (0xc00097e000) Create stream\nI0701 11:09:55.133729 500 log.go:172] (0xc000a8a8f0) (0xc00097e000) Stream added, broadcasting: 3\nI0701 11:09:55.134557 500 log.go:172] (0xc000a8a8f0) Reply frame received for 3\nI0701 11:09:55.134598 500 log.go:172] (0xc000a8a8f0) (0xc0006135e0) Create stream\nI0701 11:09:55.134611 500 log.go:172] (0xc000a8a8f0) (0xc0006135e0) Stream added, broadcasting: 5\nI0701 11:09:55.135314 500 log.go:172] (0xc000a8a8f0) Reply frame received for 5\nI0701 11:09:55.190644 500 log.go:172] (0xc000a8a8f0) Data frame received for 5\nI0701 11:09:55.190672 500 log.go:172] (0xc0006135e0) (5) Data frame handling\nI0701 11:09:55.190689 500 log.go:172] (0xc0006135e0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0701 11:09:55.214229 500 log.go:172] (0xc000a8a8f0) Data frame received for 5\nI0701 11:09:55.214264 500 log.go:172] (0xc0006135e0) (5) Data frame handling\nI0701 11:09:55.214286 500 log.go:172] (0xc0006135e0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0701 11:09:55.214516 500 log.go:172] (0xc000a8a8f0) Data frame received for 3\nI0701 11:09:55.214543 500 log.go:172] (0xc00097e000) (3) Data frame handling\nI0701 11:09:55.214844 500 log.go:172] (0xc000a8a8f0) Data frame received for 5\nI0701 11:09:55.214865 500 log.go:172] (0xc0006135e0) (5) Data frame handling\nI0701 11:09:55.216901 500 log.go:172] (0xc000a8a8f0) Data frame received for 1\nI0701 11:09:55.216932 
500 log.go:172] (0xc000613540) (1) Data frame handling\nI0701 11:09:55.216958 500 log.go:172] (0xc000613540) (1) Data frame sent\nI0701 11:09:55.216977 500 log.go:172] (0xc000a8a8f0) (0xc000613540) Stream removed, broadcasting: 1\nI0701 11:09:55.217001 500 log.go:172] (0xc000a8a8f0) Go away received\nI0701 11:09:55.217572 500 log.go:172] (0xc000a8a8f0) (0xc000613540) Stream removed, broadcasting: 1\nI0701 11:09:55.217597 500 log.go:172] (0xc000a8a8f0) (0xc00097e000) Stream removed, broadcasting: 3\nI0701 11:09:55.217611 500 log.go:172] (0xc000a8a8f0) (0xc0006135e0) Stream removed, broadcasting: 5\n" Jul 1 11:09:55.226: INFO: stdout: "" Jul 1 11:09:55.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1049 execpoddnn8b -- /bin/sh -x -c nc -zv -t -w 2 10.97.150.132 80' Jul 1 11:09:55.446: INFO: stderr: "I0701 11:09:55.364126 521 log.go:172] (0xc000a8c0b0) (0xc000408d20) Create stream\nI0701 11:09:55.364184 521 log.go:172] (0xc000a8c0b0) (0xc000408d20) Stream added, broadcasting: 1\nI0701 11:09:55.366976 521 log.go:172] (0xc000a8c0b0) Reply frame received for 1\nI0701 11:09:55.367025 521 log.go:172] (0xc000a8c0b0) (0xc0006954a0) Create stream\nI0701 11:09:55.367044 521 log.go:172] (0xc000a8c0b0) (0xc0006954a0) Stream added, broadcasting: 3\nI0701 11:09:55.367954 521 log.go:172] (0xc000a8c0b0) Reply frame received for 3\nI0701 11:09:55.368002 521 log.go:172] (0xc000a8c0b0) (0xc000695540) Create stream\nI0701 11:09:55.368026 521 log.go:172] (0xc000a8c0b0) (0xc000695540) Stream added, broadcasting: 5\nI0701 11:09:55.369068 521 log.go:172] (0xc000a8c0b0) Reply frame received for 5\nI0701 11:09:55.437011 521 log.go:172] (0xc000a8c0b0) Data frame received for 5\nI0701 11:09:55.437055 521 log.go:172] (0xc000695540) (5) Data frame handling\nI0701 11:09:55.437067 521 log.go:172] (0xc000695540) (5) Data frame sent\nI0701 11:09:55.437077 521 log.go:172] (0xc000a8c0b0) Data frame received for 
5\nI0701 11:09:55.437088 521 log.go:172] (0xc000695540) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.150.132 80\nConnection to 10.97.150.132 80 port [tcp/http] succeeded!\nI0701 11:09:55.437256 521 log.go:172] (0xc000a8c0b0) Data frame received for 3\nI0701 11:09:55.437274 521 log.go:172] (0xc0006954a0) (3) Data frame handling\nI0701 11:09:55.438636 521 log.go:172] (0xc000a8c0b0) Data frame received for 1\nI0701 11:09:55.438726 521 log.go:172] (0xc000408d20) (1) Data frame handling\nI0701 11:09:55.438755 521 log.go:172] (0xc000408d20) (1) Data frame sent\nI0701 11:09:55.438771 521 log.go:172] (0xc000a8c0b0) (0xc000408d20) Stream removed, broadcasting: 1\nI0701 11:09:55.438794 521 log.go:172] (0xc000a8c0b0) Go away received\nI0701 11:09:55.439189 521 log.go:172] (0xc000a8c0b0) (0xc000408d20) Stream removed, broadcasting: 1\nI0701 11:09:55.439212 521 log.go:172] (0xc000a8c0b0) (0xc0006954a0) Stream removed, broadcasting: 3\nI0701 11:09:55.439224 521 log.go:172] (0xc000a8c0b0) (0xc000695540) Stream removed, broadcasting: 5\n" Jul 1 11:09:55.446: INFO: stdout: "" Jul 1 11:09:55.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1049 execpoddnn8b -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 32713' Jul 1 11:09:55.658: INFO: stderr: "I0701 11:09:55.571072 542 log.go:172] (0xc00003b8c0) (0xc0005f94a0) Create stream\nI0701 11:09:55.571123 542 log.go:172] (0xc00003b8c0) (0xc0005f94a0) Stream added, broadcasting: 1\nI0701 11:09:55.574180 542 log.go:172] (0xc00003b8c0) Reply frame received for 1\nI0701 11:09:55.574229 542 log.go:172] (0xc00003b8c0) (0xc000908000) Create stream\nI0701 11:09:55.574251 542 log.go:172] (0xc00003b8c0) (0xc000908000) Stream added, broadcasting: 3\nI0701 11:09:55.575243 542 log.go:172] (0xc00003b8c0) Reply frame received for 3\nI0701 11:09:55.575284 542 log.go:172] (0xc00003b8c0) (0xc0008cc000) Create stream\nI0701 11:09:55.575301 542 log.go:172] (0xc00003b8c0) 
(0xc0008cc000) Stream added, broadcasting: 5\nI0701 11:09:55.576345 542 log.go:172] (0xc00003b8c0) Reply frame received for 5\nI0701 11:09:55.648933 542 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0701 11:09:55.649012 542 log.go:172] (0xc0008cc000) (5) Data frame handling\nI0701 11:09:55.649031 542 log.go:172] (0xc0008cc000) (5) Data frame sent\nI0701 11:09:55.649045 542 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0701 11:09:55.649053 542 log.go:172] (0xc0008cc000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 32713\nConnection to 172.17.0.15 32713 port [tcp/32713] succeeded!\nI0701 11:09:55.649068 542 log.go:172] (0xc00003b8c0) Data frame received for 3\nI0701 11:09:55.649080 542 log.go:172] (0xc000908000) (3) Data frame handling\nI0701 11:09:55.650921 542 log.go:172] (0xc00003b8c0) Data frame received for 1\nI0701 11:09:55.650949 542 log.go:172] (0xc0005f94a0) (1) Data frame handling\nI0701 11:09:55.650962 542 log.go:172] (0xc0005f94a0) (1) Data frame sent\nI0701 11:09:55.650975 542 log.go:172] (0xc00003b8c0) (0xc0005f94a0) Stream removed, broadcasting: 1\nI0701 11:09:55.650997 542 log.go:172] (0xc00003b8c0) Go away received\nI0701 11:09:55.651346 542 log.go:172] (0xc00003b8c0) (0xc0005f94a0) Stream removed, broadcasting: 1\nI0701 11:09:55.651367 542 log.go:172] (0xc00003b8c0) (0xc000908000) Stream removed, broadcasting: 3\nI0701 11:09:55.651379 542 log.go:172] (0xc00003b8c0) (0xc0008cc000) Stream removed, broadcasting: 5\n" Jul 1 11:09:55.658: INFO: stdout: "" Jul 1 11:09:55.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1049 execpoddnn8b -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 32713' Jul 1 11:09:55.883: INFO: stderr: "I0701 11:09:55.796713 565 log.go:172] (0xc000baba20) (0xc000b78b40) Create stream\nI0701 11:09:55.796780 565 log.go:172] (0xc000baba20) (0xc000b78b40) Stream added, broadcasting: 1\nI0701 11:09:55.802188 565 log.go:172] 
(0xc000baba20) Reply frame received for 1\nI0701 11:09:55.802246 565 log.go:172] (0xc000baba20) (0xc000643540) Create stream\nI0701 11:09:55.802265 565 log.go:172] (0xc000baba20) (0xc000643540) Stream added, broadcasting: 3\nI0701 11:09:55.803406 565 log.go:172] (0xc000baba20) Reply frame received for 3\nI0701 11:09:55.803446 565 log.go:172] (0xc000baba20) (0xc0004f0960) Create stream\nI0701 11:09:55.803459 565 log.go:172] (0xc000baba20) (0xc0004f0960) Stream added, broadcasting: 5\nI0701 11:09:55.804460 565 log.go:172] (0xc000baba20) Reply frame received for 5\nI0701 11:09:55.874005 565 log.go:172] (0xc000baba20) Data frame received for 5\nI0701 11:09:55.874049 565 log.go:172] (0xc0004f0960) (5) Data frame handling\nI0701 11:09:55.874084 565 log.go:172] (0xc0004f0960) (5) Data frame sent\nI0701 11:09:55.874112 565 log.go:172] (0xc000baba20) Data frame received for 5\nI0701 11:09:55.874125 565 log.go:172] (0xc0004f0960) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 32713\nConnection to 172.17.0.18 32713 port [tcp/32713] succeeded!\nI0701 11:09:55.874158 565 log.go:172] (0xc0004f0960) (5) Data frame sent\nI0701 11:09:55.874415 565 log.go:172] (0xc000baba20) Data frame received for 5\nI0701 11:09:55.874439 565 log.go:172] (0xc0004f0960) (5) Data frame handling\nI0701 11:09:55.874487 565 log.go:172] (0xc000baba20) Data frame received for 3\nI0701 11:09:55.874509 565 log.go:172] (0xc000643540) (3) Data frame handling\nI0701 11:09:55.875612 565 log.go:172] (0xc000baba20) Data frame received for 1\nI0701 11:09:55.875717 565 log.go:172] (0xc000b78b40) (1) Data frame handling\nI0701 11:09:55.875759 565 log.go:172] (0xc000b78b40) (1) Data frame sent\nI0701 11:09:55.875784 565 log.go:172] (0xc000baba20) (0xc000b78b40) Stream removed, broadcasting: 1\nI0701 11:09:55.875809 565 log.go:172] (0xc000baba20) Go away received\nI0701 11:09:55.876209 565 log.go:172] (0xc000baba20) (0xc000b78b40) Stream removed, broadcasting: 1\nI0701 11:09:55.876232 565 log.go:172] 
(0xc000baba20) (0xc000643540) Stream removed, broadcasting: 3\nI0701 11:09:55.876244 565 log.go:172] (0xc000baba20) (0xc0004f0960) Stream removed, broadcasting: 5\n" Jul 1 11:09:55.884: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 1 11:09:55.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1049" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:15.327 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":46,"skipped":681,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 1 11:09:55.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
projected-configmap-test-volume-301deed4-30e0-49f0-9e32-2cf58590cd0a
STEP: Creating a pod to test consume configMaps
Jul  1 11:09:55.971: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-33fb1575-b2de-4ec0-8ecc-93c3d2076a47" in namespace "projected-6714" to be "Succeeded or Failed"
Jul  1 11:09:56.033: INFO: Pod "pod-projected-configmaps-33fb1575-b2de-4ec0-8ecc-93c3d2076a47": Phase="Pending", Reason="", readiness=false. Elapsed: 61.564975ms
Jul  1 11:09:58.142: INFO: Pod "pod-projected-configmaps-33fb1575-b2de-4ec0-8ecc-93c3d2076a47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170176554s
Jul  1 11:10:00.146: INFO: Pod "pod-projected-configmaps-33fb1575-b2de-4ec0-8ecc-93c3d2076a47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174440229s
STEP: Saw pod success
Jul  1 11:10:00.146: INFO: Pod "pod-projected-configmaps-33fb1575-b2de-4ec0-8ecc-93c3d2076a47" satisfied condition "Succeeded or Failed"
Jul  1 11:10:00.149: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-33fb1575-b2de-4ec0-8ecc-93c3d2076a47 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  1 11:10:00.218: INFO: Waiting for pod pod-projected-configmaps-33fb1575-b2de-4ec0-8ecc-93c3d2076a47 to disappear
Jul  1 11:10:00.235: INFO: Pod pod-projected-configmaps-33fb1575-b2de-4ec0-8ecc-93c3d2076a47 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:10:00.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6714" for this suite.
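The NodePort checks earlier in this run probe the service name, the cluster IP, and each node IP with `nc -zv -t -w 2 <host> <port>`, which amounts to a plain TCP connect with a 2-second deadline. A self-contained Python equivalent (a sketch, using a throwaway local listener in place of the NodePort endpoint):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Equivalent of `nc -zv -w 2 host port`: attempt a TCP connect, then close."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Throwaway local listener standing in for the NodePort endpoint:
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
print(tcp_reachable("127.0.0.1", port))   # True while the listener is up
server.close()
```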
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":691,"failed":0}
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:10:00.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:10:00.602: INFO: (0) /api/v1/nodes/kali-worker:10250/proxy/logs/:
alternatives.log
containers/

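The request above reads node logs through the apiserver's node proxy subresource; the node name and the explicit kubelet port are joined with a colon in the resource segment of the path. A small sketch of that path construction (the helper name is illustrative):

```python
def node_proxy_logs_path(node_name: str, kubelet_port: int = 10250) -> str:
    """Build the apiserver proxy path for reading a node's kubelet log directory.

    The resource name is "<node>:<port>" when an explicit kubelet port is used.
    """
    return f"/api/v1/nodes/{node_name}:{kubelet_port}/proxy/logs/"

print(node_proxy_logs_path("kali-worker"))
```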
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Jul  1 11:10:00.780: INFO: Waiting up to 5m0s for pod "var-expansion-7fbb5a55-f24c-46a3-996b-c4907c3bd356" in namespace "var-expansion-1554" to be "Succeeded or Failed"
Jul  1 11:10:00.810: INFO: Pod "var-expansion-7fbb5a55-f24c-46a3-996b-c4907c3bd356": Phase="Pending", Reason="", readiness=false. Elapsed: 30.461377ms
Jul  1 11:10:03.051: INFO: Pod "var-expansion-7fbb5a55-f24c-46a3-996b-c4907c3bd356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271278582s
Jul  1 11:10:05.056: INFO: Pod "var-expansion-7fbb5a55-f24c-46a3-996b-c4907c3bd356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.275996919s
STEP: Saw pod success
Jul  1 11:10:05.056: INFO: Pod "var-expansion-7fbb5a55-f24c-46a3-996b-c4907c3bd356" satisfied condition "Succeeded or Failed"
Jul  1 11:10:05.059: INFO: Trying to get logs from node kali-worker pod var-expansion-7fbb5a55-f24c-46a3-996b-c4907c3bd356 container dapi-container: 
STEP: delete the pod
Jul  1 11:10:05.163: INFO: Waiting for pod var-expansion-7fbb5a55-f24c-46a3-996b-c4907c3bd356 to disappear
Jul  1 11:10:05.243: INFO: Pod var-expansion-7fbb5a55-f24c-46a3-996b-c4907c3bd356 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:10:05.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1554" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":702,"failed":0}
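The variable-expansion spec exercises `$(VAR)` substitution, which the kubelet performs on a container's `command` and `args` using the container's declared environment before the process starts. An illustrative pod manifest (a sketch with invented names, not the test's actual generated spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # illustrative; the e2e test generates its own name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # $(MESSAGE) is substituted by the kubelet, not by a shell.
    command: ["echo", "$(MESSAGE)"]
```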
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:10:05.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:10:05.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb966b37-97df-467c-815a-53b397646de4" in namespace "downward-api-2523" to be "Succeeded or Failed"
Jul  1 11:10:05.615: INFO: Pod "downwardapi-volume-fb966b37-97df-467c-815a-53b397646de4": Phase="Pending", Reason="", readiness=false. Elapsed: 106.180047ms
Jul  1 11:10:07.619: INFO: Pod "downwardapi-volume-fb966b37-97df-467c-815a-53b397646de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109720256s
Jul  1 11:10:09.624: INFO: Pod "downwardapi-volume-fb966b37-97df-467c-815a-53b397646de4": Phase="Running", Reason="", readiness=true. Elapsed: 4.114946852s
Jul  1 11:10:11.629: INFO: Pod "downwardapi-volume-fb966b37-97df-467c-815a-53b397646de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.120022861s
STEP: Saw pod success
Jul  1 11:10:11.629: INFO: Pod "downwardapi-volume-fb966b37-97df-467c-815a-53b397646de4" satisfied condition "Succeeded or Failed"
Jul  1 11:10:11.632: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-fb966b37-97df-467c-815a-53b397646de4 container client-container: 
STEP: delete the pod
Jul  1 11:10:11.683: INFO: Waiting for pod downwardapi-volume-fb966b37-97df-467c-815a-53b397646de4 to disappear
Jul  1 11:10:11.697: INFO: Pod downwardapi-volume-fb966b37-97df-467c-815a-53b397646de4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:10:11.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2523" for this suite.

• [SLOW TEST:6.455 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":719,"failed":0}
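Each completed spec emits a JSON progress record like the line above, carrying the running totals for the suite. A short sketch of reading one such record (the line is copied from this log):

```python
import json

# One progress record per completed spec (copied from the log line above).
line = ('{"msg":"PASSED [sig-storage] Downward API volume should provide '
        'container\'s cpu request [NodeConformance] [Conformance]",'
        '"total":275,"completed":50,"skipped":719,"failed":0}')

rec = json.loads(line)
status, spec = rec["msg"].split(" ", 1)     # "PASSED", "[sig-storage] ..."
remaining = rec["total"] - rec["completed"]
print(f"{status}: {spec}")
print(f"{remaining} of {rec['total']} conformance specs remaining; "
      f"{rec['skipped']} non-matching specs skipped so far")
```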
SS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:10:11.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul  1 11:10:29.908: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-12 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:10:29.908: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:10:29.946844       7 log.go:172] (0xc0027ca000) (0xc001123ea0) Create stream
I0701 11:10:29.946876       7 log.go:172] (0xc0027ca000) (0xc001123ea0) Stream added, broadcasting: 1
I0701 11:10:29.949281       7 log.go:172] (0xc0027ca000) Reply frame received for 1
I0701 11:10:29.949327       7 log.go:172] (0xc0027ca000) (0xc001e74000) Create stream
I0701 11:10:29.949338       7 log.go:172] (0xc0027ca000) (0xc001e74000) Stream added, broadcasting: 3
I0701 11:10:29.950371       7 log.go:172] (0xc0027ca000) Reply frame received for 3
I0701 11:10:29.950399       7 log.go:172] (0xc0027ca000) (0xc001638dc0) Create stream
I0701 11:10:29.950410       7 log.go:172] (0xc0027ca000) (0xc001638dc0) Stream added, broadcasting: 5
I0701 11:10:29.951323       7 log.go:172] (0xc0027ca000) Reply frame received for 5
I0701 11:10:30.042414       7 log.go:172] (0xc0027ca000) Data frame received for 5
I0701 11:10:30.042438       7 log.go:172] (0xc001638dc0) (5) Data frame handling
I0701 11:10:30.042474       7 log.go:172] (0xc0027ca000) Data frame received for 3
I0701 11:10:30.042553       7 log.go:172] (0xc001e74000) (3) Data frame handling
I0701 11:10:30.042575       7 log.go:172] (0xc001e74000) (3) Data frame sent
I0701 11:10:30.042676       7 log.go:172] (0xc0027ca000) Data frame received for 3
I0701 11:10:30.042691       7 log.go:172] (0xc001e74000) (3) Data frame handling
I0701 11:10:30.044051       7 log.go:172] (0xc0027ca000) Data frame received for 1
I0701 11:10:30.044096       7 log.go:172] (0xc001123ea0) (1) Data frame handling
I0701 11:10:30.044122       7 log.go:172] (0xc001123ea0) (1) Data frame sent
I0701 11:10:30.044140       7 log.go:172] (0xc0027ca000) (0xc001123ea0) Stream removed, broadcasting: 1
I0701 11:10:30.044226       7 log.go:172] (0xc0027ca000) Go away received
I0701 11:10:30.044625       7 log.go:172] (0xc0027ca000) (0xc001123ea0) Stream removed, broadcasting: 1
I0701 11:10:30.044642       7 log.go:172] (0xc0027ca000) (0xc001e74000) Stream removed, broadcasting: 3
I0701 11:10:30.044650       7 log.go:172] (0xc0027ca000) (0xc001638dc0) Stream removed, broadcasting: 5
Jul  1 11:10:30.044: INFO: Exec stderr: ""
Jul  1 11:10:30.044: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-12 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:10:30.044: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:10:30.092622       7 log.go:172] (0xc002eca580) (0xc001df8320) Create stream
I0701 11:10:30.092647       7 log.go:172] (0xc002eca580) (0xc001df8320) Stream added, broadcasting: 1
I0701 11:10:30.094512       7 log.go:172] (0xc002eca580) Reply frame received for 1
I0701 11:10:30.094562       7 log.go:172] (0xc002eca580) (0xc0016c01e0) Create stream
I0701 11:10:30.094584       7 log.go:172] (0xc002eca580) (0xc0016c01e0) Stream added, broadcasting: 3
I0701 11:10:30.095396       7 log.go:172] (0xc002eca580) Reply frame received for 3
I0701 11:10:30.095432       7 log.go:172] (0xc002eca580) (0xc0016d6be0) Create stream
I0701 11:10:30.095440       7 log.go:172] (0xc002eca580) (0xc0016d6be0) Stream added, broadcasting: 5
I0701 11:10:30.096162       7 log.go:172] (0xc002eca580) Reply frame received for 5
I0701 11:10:30.166874       7 log.go:172] (0xc002eca580) Data frame received for 3
I0701 11:10:30.166900       7 log.go:172] (0xc0016c01e0) (3) Data frame handling
I0701 11:10:30.166921       7 log.go:172] (0xc0016c01e0) (3) Data frame sent
I0701 11:10:30.166930       7 log.go:172] (0xc002eca580) Data frame received for 3
I0701 11:10:30.166938       7 log.go:172] (0xc0016c01e0) (3) Data frame handling
I0701 11:10:30.167073       7 log.go:172] (0xc002eca580) Data frame received for 5
I0701 11:10:30.167107       7 log.go:172] (0xc0016d6be0) (5) Data frame handling
I0701 11:10:30.168735       7 log.go:172] (0xc002eca580) Data frame received for 1
I0701 11:10:30.168749       7 log.go:172] (0xc001df8320) (1) Data frame handling
I0701 11:10:30.168756       7 log.go:172] (0xc001df8320) (1) Data frame sent
I0701 11:10:30.168765       7 log.go:172] (0xc002eca580) (0xc001df8320) Stream removed, broadcasting: 1
I0701 11:10:30.168836       7 log.go:172] (0xc002eca580) Go away received
I0701 11:10:30.168890       7 log.go:172] (0xc002eca580) (0xc001df8320) Stream removed, broadcasting: 1
I0701 11:10:30.168920       7 log.go:172] (0xc002eca580) (0xc0016c01e0) Stream removed, broadcasting: 3
I0701 11:10:30.168933       7 log.go:172] (0xc002eca580) (0xc0016d6be0) Stream removed, broadcasting: 5
Jul  1 11:10:30.168: INFO: Exec stderr: ""
Jul  1 11:10:30.168: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-12 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:10:30.168: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:10:30.198093       7 log.go:172] (0xc0027ca630) (0xc001e74280) Create stream
I0701 11:10:30.198134       7 log.go:172] (0xc0027ca630) (0xc001e74280) Stream added, broadcasting: 1
I0701 11:10:30.200360       7 log.go:172] (0xc0027ca630) Reply frame received for 1
I0701 11:10:30.200393       7 log.go:172] (0xc0027ca630) (0xc0016d6dc0) Create stream
I0701 11:10:30.200411       7 log.go:172] (0xc0027ca630) (0xc0016d6dc0) Stream added, broadcasting: 3
I0701 11:10:30.201898       7 log.go:172] (0xc0027ca630) Reply frame received for 3
I0701 11:10:30.201956       7 log.go:172] (0xc0027ca630) (0xc001df8500) Create stream
I0701 11:10:30.201980       7 log.go:172] (0xc0027ca630) (0xc001df8500) Stream added, broadcasting: 5
I0701 11:10:30.202912       7 log.go:172] (0xc0027ca630) Reply frame received for 5
I0701 11:10:30.273439       7 log.go:172] (0xc0027ca630) Data frame received for 5
I0701 11:10:30.273494       7 log.go:172] (0xc001df8500) (5) Data frame handling
I0701 11:10:30.273557       7 log.go:172] (0xc0027ca630) Data frame received for 3
I0701 11:10:30.273589       7 log.go:172] (0xc0016d6dc0) (3) Data frame handling
I0701 11:10:30.273611       7 log.go:172] (0xc0016d6dc0) (3) Data frame sent
I0701 11:10:30.273621       7 log.go:172] (0xc0027ca630) Data frame received for 3
I0701 11:10:30.273627       7 log.go:172] (0xc0016d6dc0) (3) Data frame handling
I0701 11:10:30.274848       7 log.go:172] (0xc0027ca630) Data frame received for 1
I0701 11:10:30.274871       7 log.go:172] (0xc001e74280) (1) Data frame handling
I0701 11:10:30.274891       7 log.go:172] (0xc001e74280) (1) Data frame sent
I0701 11:10:30.274912       7 log.go:172] (0xc0027ca630) (0xc001e74280) Stream removed, broadcasting: 1
I0701 11:10:30.274936       7 log.go:172] (0xc0027ca630) Go away received
I0701 11:10:30.275002       7 log.go:172] (0xc0027ca630) (0xc001e74280) Stream removed, broadcasting: 1
I0701 11:10:30.275019       7 log.go:172] (0xc0027ca630) (0xc0016d6dc0) Stream removed, broadcasting: 3
I0701 11:10:30.275028       7 log.go:172] (0xc0027ca630) (0xc001df8500) Stream removed, broadcasting: 5
Jul  1 11:10:30.275: INFO: Exec stderr: ""
Jul  1 11:10:30.275: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-12 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:10:30.275: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:10:30.305795       7 log.go:172] (0xc0027caf20) (0xc001e74640) Create stream
I0701 11:10:30.305834       7 log.go:172] (0xc0027caf20) (0xc001e74640) Stream added, broadcasting: 1
I0701 11:10:30.308082       7 log.go:172] (0xc0027caf20) Reply frame received for 1
I0701 11:10:30.308243       7 log.go:172] (0xc0027caf20) (0xc001df85a0) Create stream
I0701 11:10:30.308308       7 log.go:172] (0xc0027caf20) (0xc001df85a0) Stream added, broadcasting: 3
I0701 11:10:30.309570       7 log.go:172] (0xc0027caf20) Reply frame received for 3
I0701 11:10:30.309605       7 log.go:172] (0xc0027caf20) (0xc001638f00) Create stream
I0701 11:10:30.309620       7 log.go:172] (0xc0027caf20) (0xc001638f00) Stream added, broadcasting: 5
I0701 11:10:30.310632       7 log.go:172] (0xc0027caf20) Reply frame received for 5
I0701 11:10:30.377786       7 log.go:172] (0xc0027caf20) Data frame received for 3
I0701 11:10:30.377826       7 log.go:172] (0xc001df85a0) (3) Data frame handling
I0701 11:10:30.377852       7 log.go:172] (0xc001df85a0) (3) Data frame sent
I0701 11:10:30.377961       7 log.go:172] (0xc0027caf20) Data frame received for 5
I0701 11:10:30.378006       7 log.go:172] (0xc001638f00) (5) Data frame handling
I0701 11:10:30.378035       7 log.go:172] (0xc0027caf20) Data frame received for 3
I0701 11:10:30.378050       7 log.go:172] (0xc001df85a0) (3) Data frame handling
I0701 11:10:30.379219       7 log.go:172] (0xc0027caf20) Data frame received for 1
I0701 11:10:30.379254       7 log.go:172] (0xc001e74640) (1) Data frame handling
I0701 11:10:30.379283       7 log.go:172] (0xc001e74640) (1) Data frame sent
I0701 11:10:30.379309       7 log.go:172] (0xc0027caf20) (0xc001e74640) Stream removed, broadcasting: 1
I0701 11:10:30.379337       7 log.go:172] (0xc0027caf20) Go away received
I0701 11:10:30.379433       7 log.go:172] (0xc0027caf20) (0xc001e74640) Stream removed, broadcasting: 1
I0701 11:10:30.379454       7 log.go:172] (0xc0027caf20) (0xc001df85a0) Stream removed, broadcasting: 3
I0701 11:10:30.379469       7 log.go:172] (0xc0027caf20) (0xc001638f00) Stream removed, broadcasting: 5
Jul  1 11:10:30.379: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul  1 11:10:30.379: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-12 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:10:30.379: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:10:30.418645       7 log.go:172] (0xc002ecabb0) (0xc001df88c0) Create stream
I0701 11:10:30.418673       7 log.go:172] (0xc002ecabb0) (0xc001df88c0) Stream added, broadcasting: 1
I0701 11:10:30.421835       7 log.go:172] (0xc002ecabb0) Reply frame received for 1
I0701 11:10:30.421878       7 log.go:172] (0xc002ecabb0) (0xc001df8a00) Create stream
I0701 11:10:30.421895       7 log.go:172] (0xc002ecabb0) (0xc001df8a00) Stream added, broadcasting: 3
I0701 11:10:30.422902       7 log.go:172] (0xc002ecabb0) Reply frame received for 3
I0701 11:10:30.422946       7 log.go:172] (0xc002ecabb0) (0xc001638fa0) Create stream
I0701 11:10:30.422961       7 log.go:172] (0xc002ecabb0) (0xc001638fa0) Stream added, broadcasting: 5
I0701 11:10:30.423964       7 log.go:172] (0xc002ecabb0) Reply frame received for 5
I0701 11:10:30.501454       7 log.go:172] (0xc002ecabb0) Data frame received for 3
I0701 11:10:30.501478       7 log.go:172] (0xc001df8a00) (3) Data frame handling
I0701 11:10:30.501488       7 log.go:172] (0xc001df8a00) (3) Data frame sent
I0701 11:10:30.501496       7 log.go:172] (0xc002ecabb0) Data frame received for 3
I0701 11:10:30.501500       7 log.go:172] (0xc001df8a00) (3) Data frame handling
I0701 11:10:30.501539       7 log.go:172] (0xc002ecabb0) Data frame received for 5
I0701 11:10:30.501547       7 log.go:172] (0xc001638fa0) (5) Data frame handling
I0701 11:10:30.502944       7 log.go:172] (0xc002ecabb0) Data frame received for 1
I0701 11:10:30.502986       7 log.go:172] (0xc001df88c0) (1) Data frame handling
I0701 11:10:30.503017       7 log.go:172] (0xc001df88c0) (1) Data frame sent
I0701 11:10:30.503042       7 log.go:172] (0xc002ecabb0) (0xc001df88c0) Stream removed, broadcasting: 1
I0701 11:10:30.503063       7 log.go:172] (0xc002ecabb0) Go away received
I0701 11:10:30.503191       7 log.go:172] (0xc002ecabb0) (0xc001df88c0) Stream removed, broadcasting: 1
I0701 11:10:30.503213       7 log.go:172] (0xc002ecabb0) (0xc001df8a00) Stream removed, broadcasting: 3
I0701 11:10:30.503229       7 log.go:172] (0xc002ecabb0) (0xc001638fa0) Stream removed, broadcasting: 5
Jul  1 11:10:30.503: INFO: Exec stderr: ""
Jul  1 11:10:30.503: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-12 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:10:30.503: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:10:30.534519       7 log.go:172] (0xc002ecb1e0) (0xc001df8f00) Create stream
I0701 11:10:30.534552       7 log.go:172] (0xc002ecb1e0) (0xc001df8f00) Stream added, broadcasting: 1
I0701 11:10:30.536874       7 log.go:172] (0xc002ecb1e0) Reply frame received for 1
I0701 11:10:30.536921       7 log.go:172] (0xc002ecb1e0) (0xc001df8fa0) Create stream
I0701 11:10:30.536940       7 log.go:172] (0xc002ecb1e0) (0xc001df8fa0) Stream added, broadcasting: 3
I0701 11:10:30.538593       7 log.go:172] (0xc002ecb1e0) Reply frame received for 3
I0701 11:10:30.538625       7 log.go:172] (0xc002ecb1e0) (0xc0016d7180) Create stream
I0701 11:10:30.538634       7 log.go:172] (0xc002ecb1e0) (0xc0016d7180) Stream added, broadcasting: 5
I0701 11:10:30.539531       7 log.go:172] (0xc002ecb1e0) Reply frame received for 5
I0701 11:10:30.613845       7 log.go:172] (0xc002ecb1e0) Data frame received for 5
I0701 11:10:30.614109       7 log.go:172] (0xc0016d7180) (5) Data frame handling
I0701 11:10:30.614156       7 log.go:172] (0xc002ecb1e0) Data frame received for 3
I0701 11:10:30.614183       7 log.go:172] (0xc001df8fa0) (3) Data frame handling
I0701 11:10:30.614213       7 log.go:172] (0xc001df8fa0) (3) Data frame sent
I0701 11:10:30.614237       7 log.go:172] (0xc002ecb1e0) Data frame received for 3
I0701 11:10:30.614259       7 log.go:172] (0xc001df8fa0) (3) Data frame handling
I0701 11:10:30.615511       7 log.go:172] (0xc002ecb1e0) Data frame received for 1
I0701 11:10:30.615544       7 log.go:172] (0xc001df8f00) (1) Data frame handling
I0701 11:10:30.615579       7 log.go:172] (0xc001df8f00) (1) Data frame sent
I0701 11:10:30.615600       7 log.go:172] (0xc002ecb1e0) (0xc001df8f00) Stream removed, broadcasting: 1
I0701 11:10:30.615737       7 log.go:172] (0xc002ecb1e0) (0xc001df8f00) Stream removed, broadcasting: 1
I0701 11:10:30.615763       7 log.go:172] (0xc002ecb1e0) (0xc001df8fa0) Stream removed, broadcasting: 3
I0701 11:10:30.615840       7 log.go:172] (0xc002ecb1e0) Go away received
I0701 11:10:30.616034       7 log.go:172] (0xc002ecb1e0) (0xc0016d7180) Stream removed, broadcasting: 5
Jul  1 11:10:30.616: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jul  1 11:10:30.616: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-12 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:10:30.616: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:10:30.656924       7 log.go:172] (0xc002ecb810) (0xc001df92c0) Create stream
I0701 11:10:30.656956       7 log.go:172] (0xc002ecb810) (0xc001df92c0) Stream added, broadcasting: 1
I0701 11:10:30.659644       7 log.go:172] (0xc002ecb810) Reply frame received for 1
I0701 11:10:30.659831       7 log.go:172] (0xc002ecb810) (0xc001df9540) Create stream
I0701 11:10:30.659858       7 log.go:172] (0xc002ecb810) (0xc001df9540) Stream added, broadcasting: 3
I0701 11:10:30.661004       7 log.go:172] (0xc002ecb810) Reply frame received for 3
I0701 11:10:30.661047       7 log.go:172] (0xc002ecb810) (0xc001e746e0) Create stream
I0701 11:10:30.661062       7 log.go:172] (0xc002ecb810) (0xc001e746e0) Stream added, broadcasting: 5
I0701 11:10:30.662450       7 log.go:172] (0xc002ecb810) Reply frame received for 5
I0701 11:10:30.751221       7 log.go:172] (0xc002ecb810) Data frame received for 5
I0701 11:10:30.751266       7 log.go:172] (0xc001e746e0) (5) Data frame handling
I0701 11:10:30.751302       7 log.go:172] (0xc002ecb810) Data frame received for 3
I0701 11:10:30.751341       7 log.go:172] (0xc001df9540) (3) Data frame handling
I0701 11:10:30.751387       7 log.go:172] (0xc001df9540) (3) Data frame sent
I0701 11:10:30.751414       7 log.go:172] (0xc002ecb810) Data frame received for 3
I0701 11:10:30.751435       7 log.go:172] (0xc001df9540) (3) Data frame handling
I0701 11:10:30.753837       7 log.go:172] (0xc002ecb810) Data frame received for 1
I0701 11:10:30.753875       7 log.go:172] (0xc001df92c0) (1) Data frame handling
I0701 11:10:30.753898       7 log.go:172] (0xc001df92c0) (1) Data frame sent
I0701 11:10:30.753920       7 log.go:172] (0xc002ecb810) (0xc001df92c0) Stream removed, broadcasting: 1
I0701 11:10:30.753975       7 log.go:172] (0xc002ecb810) Go away received
I0701 11:10:30.754073       7 log.go:172] (0xc002ecb810) (0xc001df92c0) Stream removed, broadcasting: 1
I0701 11:10:30.754090       7 log.go:172] (0xc002ecb810) (0xc001df9540) Stream removed, broadcasting: 3
I0701 11:10:30.754102       7 log.go:172] (0xc002ecb810) (0xc001e746e0) Stream removed, broadcasting: 5
Jul  1 11:10:30.754: INFO: Exec stderr: ""
Jul  1 11:10:30.754: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-12 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:10:30.754: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:10:30.790190       7 log.go:172] (0xc002caa6e0) (0xc001639400) Create stream
I0701 11:10:30.790224       7 log.go:172] (0xc002caa6e0) (0xc001639400) Stream added, broadcasting: 1
I0701 11:10:30.792733       7 log.go:172] (0xc002caa6e0) Reply frame received for 1
I0701 11:10:30.792779       7 log.go:172] (0xc002caa6e0) (0xc0016d7540) Create stream
I0701 11:10:30.792795       7 log.go:172] (0xc002caa6e0) (0xc0016d7540) Stream added, broadcasting: 3
I0701 11:10:30.793902       7 log.go:172] (0xc002caa6e0) Reply frame received for 3
I0701 11:10:30.793951       7 log.go:172] (0xc002caa6e0) (0xc0016c05a0) Create stream
I0701 11:10:30.793963       7 log.go:172] (0xc002caa6e0) (0xc0016c05a0) Stream added, broadcasting: 5
I0701 11:10:30.794780       7 log.go:172] (0xc002caa6e0) Reply frame received for 5
I0701 11:10:30.854281       7 log.go:172] (0xc002caa6e0) Data frame received for 5
I0701 11:10:30.854329       7 log.go:172] (0xc0016c05a0) (5) Data frame handling
I0701 11:10:30.854371       7 log.go:172] (0xc002caa6e0) Data frame received for 3
I0701 11:10:30.854392       7 log.go:172] (0xc0016d7540) (3) Data frame handling
I0701 11:10:30.854417       7 log.go:172] (0xc0016d7540) (3) Data frame sent
I0701 11:10:30.854436       7 log.go:172] (0xc002caa6e0) Data frame received for 3
I0701 11:10:30.854453       7 log.go:172] (0xc0016d7540) (3) Data frame handling
I0701 11:10:30.855547       7 log.go:172] (0xc002caa6e0) Data frame received for 1
I0701 11:10:30.855569       7 log.go:172] (0xc001639400) (1) Data frame handling
I0701 11:10:30.855585       7 log.go:172] (0xc001639400) (1) Data frame sent
I0701 11:10:30.855606       7 log.go:172] (0xc002caa6e0) (0xc001639400) Stream removed, broadcasting: 1
I0701 11:10:30.855718       7 log.go:172] (0xc002caa6e0) Go away received
I0701 11:10:30.855757       7 log.go:172] (0xc002caa6e0) (0xc001639400) Stream removed, broadcasting: 1
I0701 11:10:30.855779       7 log.go:172] (0xc002caa6e0) (0xc0016d7540) Stream removed, broadcasting: 3
I0701 11:10:30.855791       7 log.go:172] (0xc002caa6e0) (0xc0016c05a0) Stream removed, broadcasting: 5
Jul  1 11:10:30.855: INFO: Exec stderr: ""
Jul  1 11:10:30.855: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-12 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:10:30.855: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:10:30.886449       7 log.go:172] (0xc002ecbd90) (0xc001df9720) Create stream
I0701 11:10:30.886475       7 log.go:172] (0xc002ecbd90) (0xc001df9720) Stream added, broadcasting: 1
I0701 11:10:30.888487       7 log.go:172] (0xc002ecbd90) Reply frame received for 1
I0701 11:10:30.888537       7 log.go:172] (0xc002ecbd90) (0xc001639680) Create stream
I0701 11:10:30.888556       7 log.go:172] (0xc002ecbd90) (0xc001639680) Stream added, broadcasting: 3
I0701 11:10:30.889757       7 log.go:172] (0xc002ecbd90) Reply frame received for 3
I0701 11:10:30.889784       7 log.go:172] (0xc002ecbd90) (0xc001df97c0) Create stream
I0701 11:10:30.889793       7 log.go:172] (0xc002ecbd90) (0xc001df97c0) Stream added, broadcasting: 5
I0701 11:10:30.890689       7 log.go:172] (0xc002ecbd90) Reply frame received for 5
I0701 11:10:30.949540       7 log.go:172] (0xc002ecbd90) Data frame received for 5
I0701 11:10:30.949566       7 log.go:172] (0xc001df97c0) (5) Data frame handling
I0701 11:10:30.949591       7 log.go:172] (0xc002ecbd90) Data frame received for 3
I0701 11:10:30.949602       7 log.go:172] (0xc001639680) (3) Data frame handling
I0701 11:10:30.949619       7 log.go:172] (0xc001639680) (3) Data frame sent
I0701 11:10:30.949633       7 log.go:172] (0xc002ecbd90) Data frame received for 3
I0701 11:10:30.949641       7 log.go:172] (0xc001639680) (3) Data frame handling
I0701 11:10:30.951428       7 log.go:172] (0xc002ecbd90) Data frame received for 1
I0701 11:10:30.951439       7 log.go:172] (0xc001df9720) (1) Data frame handling
I0701 11:10:30.951445       7 log.go:172] (0xc001df9720) (1) Data frame sent
I0701 11:10:30.951456       7 log.go:172] (0xc002ecbd90) (0xc001df9720) Stream removed, broadcasting: 1
I0701 11:10:30.951541       7 log.go:172] (0xc002ecbd90) (0xc001df9720) Stream removed, broadcasting: 1
I0701 11:10:30.951552       7 log.go:172] (0xc002ecbd90) (0xc001639680) Stream removed, broadcasting: 3
I0701 11:10:30.951557       7 log.go:172] (0xc002ecbd90) (0xc001df97c0) Stream removed, broadcasting: 5
Jul  1 11:10:30.951: INFO: Exec stderr: ""
Jul  1 11:10:30.951: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-12 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:10:30.951: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:10:30.951641       7 log.go:172] (0xc002ecbd90) Go away received
I0701 11:10:30.981647       7 log.go:172] (0xc002caad10) (0xc001639860) Create stream
I0701 11:10:30.981680       7 log.go:172] (0xc002caad10) (0xc001639860) Stream added, broadcasting: 1
I0701 11:10:30.984734       7 log.go:172] (0xc002caad10) Reply frame received for 1
I0701 11:10:30.984778       7 log.go:172] (0xc002caad10) (0xc001df9860) Create stream
I0701 11:10:30.984831       7 log.go:172] (0xc002caad10) (0xc001df9860) Stream added, broadcasting: 3
I0701 11:10:30.986309       7 log.go:172] (0xc002caad10) Reply frame received for 3
I0701 11:10:30.986349       7 log.go:172] (0xc002caad10) (0xc0016c08c0) Create stream
I0701 11:10:30.986360       7 log.go:172] (0xc002caad10) (0xc0016c08c0) Stream added, broadcasting: 5
I0701 11:10:30.987417       7 log.go:172] (0xc002caad10) Reply frame received for 5
I0701 11:10:31.058712       7 log.go:172] (0xc002caad10) Data frame received for 3
I0701 11:10:31.058749       7 log.go:172] (0xc001df9860) (3) Data frame handling
I0701 11:10:31.058764       7 log.go:172] (0xc001df9860) (3) Data frame sent
I0701 11:10:31.058775       7 log.go:172] (0xc002caad10) Data frame received for 3
I0701 11:10:31.058786       7 log.go:172] (0xc001df9860) (3) Data frame handling
I0701 11:10:31.058832       7 log.go:172] (0xc002caad10) Data frame received for 5
I0701 11:10:31.058860       7 log.go:172] (0xc0016c08c0) (5) Data frame handling
I0701 11:10:31.060068       7 log.go:172] (0xc002caad10) Data frame received for 1
I0701 11:10:31.060088       7 log.go:172] (0xc001639860) (1) Data frame handling
I0701 11:10:31.060249       7 log.go:172] (0xc001639860) (1) Data frame sent
I0701 11:10:31.060265       7 log.go:172] (0xc002caad10) (0xc001639860) Stream removed, broadcasting: 1
I0701 11:10:31.060276       7 log.go:172] (0xc002caad10) Go away received
I0701 11:10:31.060382       7 log.go:172] (0xc002caad10) (0xc001639860) Stream removed, broadcasting: 1
I0701 11:10:31.060414       7 log.go:172] (0xc002caad10) (0xc001df9860) Stream removed, broadcasting: 3
I0701 11:10:31.060429       7 log.go:172] (0xc002caad10) (0xc0016c08c0) Stream removed, broadcasting: 5
Jul  1 11:10:31.060: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:10:31.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-12" for this suite.

• [SLOW TEST:19.359 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":721,"failed":0}
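The exec calls logged above repeatedly `cat /etc/hosts` inside each container to decide whether the file is kubelet-managed. A minimal sketch of that check, assuming the kubelet's marker comment (`# Kubernetes-managed hosts file.`) identifies a managed file; in a live cluster the file would come from something like `kubectl exec test-pod -c busybox-1 -n e2e-kubelet-etc-hosts-12 -- cat /etc/hosts`, simulated here with a sample file:

```shell
# Returns success if the given hosts file carries the kubelet's marker comment.
is_kubelet_managed() {
  grep -q 'Kubernetes-managed hosts file' "$1"
}

# Local stand-in for the `kubectl exec ... cat /etc/hosts` output above.
printf '# Kubernetes-managed hosts file.\n127.0.0.1 localhost\n' > /tmp/hosts-sample
if is_kubelet_managed /tmp/hosts-sample; then
  echo "managed"
else
  echo "not managed"
fi
```

A container that mounts its own volume at `/etc/hosts` (as `busybox-3` does above) would fail this check, which is exactly what the test asserts.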
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:10:31.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:10:35.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4278" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":52,"skipped":738,"failed":0}

------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:10:35.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-58fb898a-be05-4e14-8968-c4244fc04827
STEP: Creating a pod to test consume configMaps
Jul  1 11:10:35.471: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-90e52a62-84a8-45be-9afd-08ee20d2f314" in namespace "projected-9267" to be "Succeeded or Failed"
Jul  1 11:10:35.756: INFO: Pod "pod-projected-configmaps-90e52a62-84a8-45be-9afd-08ee20d2f314": Phase="Pending", Reason="", readiness=false. Elapsed: 284.991794ms
Jul  1 11:10:37.761: INFO: Pod "pod-projected-configmaps-90e52a62-84a8-45be-9afd-08ee20d2f314": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289690597s
Jul  1 11:10:39.765: INFO: Pod "pod-projected-configmaps-90e52a62-84a8-45be-9afd-08ee20d2f314": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293868875s
Jul  1 11:10:41.769: INFO: Pod "pod-projected-configmaps-90e52a62-84a8-45be-9afd-08ee20d2f314": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.297874789s
STEP: Saw pod success
Jul  1 11:10:41.769: INFO: Pod "pod-projected-configmaps-90e52a62-84a8-45be-9afd-08ee20d2f314" satisfied condition "Succeeded or Failed"
Jul  1 11:10:41.772: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-90e52a62-84a8-45be-9afd-08ee20d2f314 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  1 11:10:41.802: INFO: Waiting for pod pod-projected-configmaps-90e52a62-84a8-45be-9afd-08ee20d2f314 to disappear
Jul  1 11:10:41.809: INFO: Pod pod-projected-configmaps-90e52a62-84a8-45be-9afd-08ee20d2f314 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:10:41.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9267" for this suite.

• [SLOW TEST:6.465 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":738,"failed":0}
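The passing test above consumes a ConfigMap through a projected volume as a non-root user. A rough sketch of such a pod manifest (the names, image, and key are illustrative placeholders, not the test's generated ones):

```shell
# Write a minimal pod spec that mounts a ConfigMap via a projected volume
# and runs as a non-root user, roughly what the test exercises.
# Apply against a cluster with: kubectl apply -f /tmp/projected-cm-pod.yaml
cat > /tmp/projected-cm-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  securityContext:
    runAsUser: 1000        # non-root, as the test verifies
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.29
    command: ["cat", "/etc/projected/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data
            path: data
EOF
grep -q 'projected:' /tmp/projected-cm-pod.yaml && echo "manifest written"
```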
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:10:41.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 11:10:42.567: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 11:10:44.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198642, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198642, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198642, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729198642, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 11:10:47.616: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:10:47.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1538" for this suite.
STEP: Destroying namespace "webhook-1538-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.974 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":54,"skipped":754,"failed":0}
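The discovery checks above boil down to fetching the discovery documents and confirming the webhook resources appear in them. Against a live cluster that would be `kubectl get --raw /apis/admissionregistration.k8s.io/v1`; here is the same check run over a trimmed sample of that document (the sample's shape is an assumption for illustration):

```shell
# Trimmed stand-in for the /apis/admissionregistration.k8s.io/v1 discovery doc.
cat > /tmp/discovery-v1.json <<'EOF'
{"kind":"APIResourceList","groupVersion":"admissionregistration.k8s.io/v1",
 "resources":[{"name":"mutatingwebhookconfigurations"},
              {"name":"validatingwebhookconfigurations"}]}
EOF

# The test passes iff both webhook resource names are present.
for r in mutatingwebhookconfigurations validatingwebhookconfigurations; do
  grep -q "\"$r\"" /tmp/discovery-v1.json && echo "found $r"
done
```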
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:10:47.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  1 11:10:47.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1244'
Jul  1 11:10:48.044: INFO: stderr: ""
Jul  1 11:10:48.044: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jul  1 11:10:53.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1244 -o json'
Jul  1 11:10:53.198: INFO: stderr: ""
Jul  1 11:10:53.198: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-07-01T11:10:48Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-07-01T11:10:48Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                        
    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.2.107\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                            }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-07-01T11:10:52Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-1244\",\n        \"resourceVersion\": \"16787930\",\n        \"selfLink\": 
\"/api/v1/namespaces/kubectl-1244/pods/e2e-test-httpd-pod\",\n        \"uid\": \"bef40031-f9c5-4559-b3de-518f0edd5df4\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-jtqz7\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-jtqz7\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-jtqz7\"\n                }\n        
    }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-01T11:10:48Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-01T11:10:52Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-01T11:10:52Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-01T11:10:48Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://ac6be063c3919748eb3443d6d9a746c0e26152b3a66eb73fc55d96937ecd99bc\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-07-01T11:10:52Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.17.0.15\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.107\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.107\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        
\"startTime\": \"2020-07-01T11:10:48Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jul  1 11:10:53.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1244'
Jul  1 11:10:53.459: INFO: stderr: ""
Jul  1 11:10:53.459: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jul  1 11:10:53.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1244'
Jul  1 11:10:58.213: INFO: stderr: ""
Jul  1 11:10:58.213: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:10:58.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1244" for this suite.

• [SLOW TEST:10.430 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":55,"skipped":779,"failed":0}
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:10:58.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Jul  1 11:11:02.852: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1038 pod-service-account-7078ac32-62e6-4c91-b3e3-3e766ecd1c77 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jul  1 11:11:03.094: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1038 pod-service-account-7078ac32-62e6-4c91-b3e3-3e766ecd1c77 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jul  1 11:11:03.298: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1038 pod-service-account-7078ac32-62e6-4c91-b3e3-3e766ecd1c77 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:11:03.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1038" for this suite.

• [SLOW TEST:5.293 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":56,"skipped":787,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:11:03.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:11:03.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d72cc6cd-9058-4066-8533-2e690661a99c" in namespace "downward-api-8473" to be "Succeeded or Failed"
Jul  1 11:11:03.843: INFO: Pod "downwardapi-volume-d72cc6cd-9058-4066-8533-2e690661a99c": Phase="Pending", Reason="", readiness=false. Elapsed: 247.550876ms
Jul  1 11:11:05.848: INFO: Pod "downwardapi-volume-d72cc6cd-9058-4066-8533-2e690661a99c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252840327s
Jul  1 11:11:07.852: INFO: Pod "downwardapi-volume-d72cc6cd-9058-4066-8533-2e690661a99c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.256932745s
STEP: Saw pod success
Jul  1 11:11:07.852: INFO: Pod "downwardapi-volume-d72cc6cd-9058-4066-8533-2e690661a99c" satisfied condition "Succeeded or Failed"
Jul  1 11:11:07.855: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-d72cc6cd-9058-4066-8533-2e690661a99c container client-container: 
STEP: delete the pod
Jul  1 11:11:08.033: INFO: Waiting for pod downwardapi-volume-d72cc6cd-9058-4066-8533-2e690661a99c to disappear
Jul  1 11:11:08.202: INFO: Pod downwardapi-volume-d72cc6cd-9058-4066-8533-2e690661a99c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:11:08.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8473" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":822,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:11:08.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Jul  1 11:11:08.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:11:23.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5706" for this suite.

• [SLOW TEST:14.855 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":58,"skipped":829,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:11:23.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:11:23.527: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:11:24.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1470" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":59,"skipped":833,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:11:24.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:11:42.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9948" for this suite.

• [SLOW TEST:18.226 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":60,"skipped":845,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:11:42.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-kh52
STEP: Creating a pod to test atomic-volume-subpath
Jul  1 11:11:42.928: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kh52" in namespace "subpath-6089" to be "Succeeded or Failed"
Jul  1 11:11:42.938: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Pending", Reason="", readiness=false. Elapsed: 9.838063ms
Jul  1 11:11:44.942: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013735914s
Jul  1 11:11:46.946: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Running", Reason="", readiness=true. Elapsed: 4.01791045s
Jul  1 11:11:48.951: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Running", Reason="", readiness=true. Elapsed: 6.022485207s
Jul  1 11:11:50.956: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Running", Reason="", readiness=true. Elapsed: 8.027826292s
Jul  1 11:11:52.960: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Running", Reason="", readiness=true. Elapsed: 10.032063366s
Jul  1 11:11:54.965: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Running", Reason="", readiness=true. Elapsed: 12.036977318s
Jul  1 11:11:56.973: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Running", Reason="", readiness=true. Elapsed: 14.04466265s
Jul  1 11:11:58.978: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Running", Reason="", readiness=true. Elapsed: 16.049437635s
Jul  1 11:12:00.982: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Running", Reason="", readiness=true. Elapsed: 18.054009818s
Jul  1 11:12:02.987: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Running", Reason="", readiness=true. Elapsed: 20.058729361s
Jul  1 11:12:04.992: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Running", Reason="", readiness=true. Elapsed: 22.063791183s
Jul  1 11:12:06.997: INFO: Pod "pod-subpath-test-configmap-kh52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.068431517s
STEP: Saw pod success
Jul  1 11:12:06.997: INFO: Pod "pod-subpath-test-configmap-kh52" satisfied condition "Succeeded or Failed"
Jul  1 11:12:07.000: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-kh52 container test-container-subpath-configmap-kh52: 
STEP: delete the pod
Jul  1 11:12:07.120: INFO: Waiting for pod pod-subpath-test-configmap-kh52 to disappear
Jul  1 11:12:07.125: INFO: Pod pod-subpath-test-configmap-kh52 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-kh52
Jul  1 11:12:07.125: INFO: Deleting pod "pod-subpath-test-configmap-kh52" in namespace "subpath-6089"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:12:07.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6089" for this suite.

• [SLOW TEST:24.341 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":61,"skipped":848,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:12:07.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jul  1 11:12:13.256: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1169 PodName:pod-sharedvolume-01ccde5c-507f-4d94-8807-90d5066de24c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:12:13.256: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:12:13.280580       7 log.go:172] (0xc004fc66e0) (0xc0016d6c80) Create stream
I0701 11:12:13.280611       7 log.go:172] (0xc004fc66e0) (0xc0016d6c80) Stream added, broadcasting: 1
I0701 11:12:13.282606       7 log.go:172] (0xc004fc66e0) Reply frame received for 1
I0701 11:12:13.282633       7 log.go:172] (0xc004fc66e0) (0xc0023480a0) Create stream
I0701 11:12:13.282650       7 log.go:172] (0xc004fc66e0) (0xc0023480a0) Stream added, broadcasting: 3
I0701 11:12:13.283266       7 log.go:172] (0xc004fc66e0) Reply frame received for 3
I0701 11:12:13.283293       7 log.go:172] (0xc004fc66e0) (0xc0011235e0) Create stream
I0701 11:12:13.283311       7 log.go:172] (0xc004fc66e0) (0xc0011235e0) Stream added, broadcasting: 5
I0701 11:12:13.284064       7 log.go:172] (0xc004fc66e0) Reply frame received for 5
I0701 11:12:13.339434       7 log.go:172] (0xc004fc66e0) Data frame received for 5
I0701 11:12:13.339466       7 log.go:172] (0xc0011235e0) (5) Data frame handling
I0701 11:12:13.339482       7 log.go:172] (0xc004fc66e0) Data frame received for 3
I0701 11:12:13.339487       7 log.go:172] (0xc0023480a0) (3) Data frame handling
I0701 11:12:13.339493       7 log.go:172] (0xc0023480a0) (3) Data frame sent
I0701 11:12:13.339498       7 log.go:172] (0xc004fc66e0) Data frame received for 3
I0701 11:12:13.339509       7 log.go:172] (0xc0023480a0) (3) Data frame handling
I0701 11:12:13.340584       7 log.go:172] (0xc004fc66e0) Data frame received for 1
I0701 11:12:13.340608       7 log.go:172] (0xc0016d6c80) (1) Data frame handling
I0701 11:12:13.340632       7 log.go:172] (0xc0016d6c80) (1) Data frame sent
I0701 11:12:13.340679       7 log.go:172] (0xc004fc66e0) (0xc0016d6c80) Stream removed, broadcasting: 1
I0701 11:12:13.340706       7 log.go:172] (0xc004fc66e0) Go away received
I0701 11:12:13.340887       7 log.go:172] (0xc004fc66e0) (0xc0016d6c80) Stream removed, broadcasting: 1
I0701 11:12:13.340916       7 log.go:172] (0xc004fc66e0) (0xc0023480a0) Stream removed, broadcasting: 3
I0701 11:12:13.340949       7 log.go:172] (0xc004fc66e0) (0xc0011235e0) Stream removed, broadcasting: 5
Jul  1 11:12:13.340: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:12:13.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1169" for this suite.

• [SLOW TEST:6.193 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":62,"skipped":865,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:12:13.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul  1 11:12:13.810: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:12:13.835: INFO: Number of nodes with available pods: 0
Jul  1 11:12:13.835: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:12:14.838: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:12:14.841: INFO: Number of nodes with available pods: 0
Jul  1 11:12:14.841: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:12:16.015: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:12:16.020: INFO: Number of nodes with available pods: 0
Jul  1 11:12:16.020: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:12:16.993: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:12:16.996: INFO: Number of nodes with available pods: 0
Jul  1 11:12:16.996: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:12:17.840: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:12:17.843: INFO: Number of nodes with available pods: 1
Jul  1 11:12:17.843: INFO: Node kali-worker2 is running more than one daemon pod
Jul  1 11:12:18.862: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:12:18.865: INFO: Number of nodes with available pods: 2
Jul  1 11:12:18.866: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul  1 11:12:18.906: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:12:18.919: INFO: Number of nodes with available pods: 2
Jul  1 11:12:18.919: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5317, will wait for the garbage collector to delete the pods
Jul  1 11:12:20.009: INFO: Deleting DaemonSet.extensions daemon-set took: 6.596912ms
Jul  1 11:12:20.409: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.239014ms
Jul  1 11:12:33.428: INFO: Number of nodes with available pods: 0
Jul  1 11:12:33.428: INFO: Number of running nodes: 0, number of available pods: 0
Jul  1 11:12:33.430: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5317/daemonsets","resourceVersion":"16788572"},"items":null}

Jul  1 11:12:33.432: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5317/pods","resourceVersion":"16788572"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:12:33.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5317" for this suite.

• [SLOW TEST:20.106 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":63,"skipped":891,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:12:33.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Jul  1 11:12:33.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8175'
Jul  1 11:12:33.867: INFO: stderr: ""
Jul  1 11:12:33.867: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul  1 11:12:34.904: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 11:12:34.904: INFO: Found 0 / 1
Jul  1 11:12:35.972: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 11:12:35.972: INFO: Found 0 / 1
Jul  1 11:12:36.871: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 11:12:36.871: INFO: Found 0 / 1
Jul  1 11:12:38.354: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 11:12:38.354: INFO: Found 1 / 1
Jul  1 11:12:38.354: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul  1 11:12:38.357: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 11:12:38.357: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  1 11:12:38.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config patch pod agnhost-master-2kt6w --namespace=kubectl-8175 -p {"metadata":{"annotations":{"x":"y"}}}'
Jul  1 11:12:39.098: INFO: stderr: ""
Jul  1 11:12:39.098: INFO: stdout: "pod/agnhost-master-2kt6w patched\n"
STEP: checking annotations
Jul  1 11:12:39.215: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 11:12:39.215: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:12:39.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8175" for this suite.

• [SLOW TEST:5.768 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":64,"skipped":910,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:12:39.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  1 11:12:39.453: INFO: Waiting up to 5m0s for pod "pod-d51782a0-75b8-46ca-ac5d-0d09b916c196" in namespace "emptydir-8984" to be "Succeeded or Failed"
Jul  1 11:12:39.522: INFO: Pod "pod-d51782a0-75b8-46ca-ac5d-0d09b916c196": Phase="Pending", Reason="", readiness=false. Elapsed: 68.927155ms
Jul  1 11:12:41.652: INFO: Pod "pod-d51782a0-75b8-46ca-ac5d-0d09b916c196": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198815154s
Jul  1 11:12:43.656: INFO: Pod "pod-d51782a0-75b8-46ca-ac5d-0d09b916c196": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203228279s
Jul  1 11:12:45.724: INFO: Pod "pod-d51782a0-75b8-46ca-ac5d-0d09b916c196": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.270679004s
STEP: Saw pod success
Jul  1 11:12:45.724: INFO: Pod "pod-d51782a0-75b8-46ca-ac5d-0d09b916c196" satisfied condition "Succeeded or Failed"
Jul  1 11:12:45.727: INFO: Trying to get logs from node kali-worker pod pod-d51782a0-75b8-46ca-ac5d-0d09b916c196 container test-container: 
STEP: delete the pod
Jul  1 11:12:45.759: INFO: Waiting for pod pod-d51782a0-75b8-46ca-ac5d-0d09b916c196 to disappear
Jul  1 11:12:45.772: INFO: Pod pod-d51782a0-75b8-46ca-ac5d-0d09b916c196 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:12:45.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8984" for this suite.

• [SLOW TEST:6.555 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":949,"failed":0}
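The emptyDir test above creates a pod whose volume is backed by tmpfs and checks the 0644 file mode. A minimal sketch of that kind of pod manifest (the pod name, image, and shell command are illustrative assumptions; the suite uses its own test image — the essential parts are `medium: Memory` and the mode check):

```python
# Hypothetical pod manifest for an emptyDir-on-tmpfs mode test.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "emptydir-0644-tmpfs"},  # illustrative name
    "spec": {
        "restartPolicy": "Never",
        "volumes": [
            # medium: Memory backs the emptyDir with tmpfs.
            {"name": "test-volume", "emptyDir": {"medium": "Memory"}}
        ],
        "containers": [{
            "name": "test-container",
            "image": "busybox",  # assumption; not the suite's actual image
            "command": ["sh", "-c",
                        "touch /test/f && chmod 0644 /test/f && stat -c %a /test/f"],
            "volumeMounts": [{"name": "test-volume", "mountPath": "/test"}],
        }],
    },
}
```

The framework waits for the pod to reach "Succeeded or Failed" (as logged above) and then inspects the container's output.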
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:12:45.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jul  1 11:12:49.878: INFO: &Pod{ObjectMeta:{send-events-46199e8a-a7f2-4760-b227-468aa11165d6  events-2375 /api/v1/namespaces/events-2375/pods/send-events-46199e8a-a7f2-4760-b227-468aa11165d6 ea376c83-099a-4983-b2d0-d2738e43ca4c 16788698 0 2020-07-01 11:12:45 +0000 UTC   map[name:foo time:824081407] map[] [] []  [{e2e.test Update v1 2020-07-01 11:12:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 
44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-01 11:12:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 49 51 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wn4v5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wn4v5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wn4v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChan
gePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:12:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:12:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:12:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:12:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.113,StartTime:2020-07-01 11:12:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 11:12:48 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://25fda91f294cdeb10bc02a7ffe9834873bb71e596df24534a48775585a6157bd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.113,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

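The `FieldsV1{Raw:*[...]}` arrays in the pod dump above are managed-fields JSON that the framework prints as decimal byte values. A small sketch of decoding such an array back into readable text, using the first fifteen bytes of the `e2e.test` entry from the dump:

```python
# First bytes of the e2e.test managedFields entry from the dump above,
# as printed by the framework in decimal.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123]

# Convert the decimal values to bytes and decode as UTF-8 JSON text.
decoded = bytes(raw).decode("utf-8")
print(decoded)  # {"f:metadata":{
```

Decoding the full arrays this way recovers the server-side-apply field ownership records (`f:metadata`, `f:spec`, `f:status`, …) for the `e2e.test` and `kubelet` managers.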
STEP: checking for scheduler event about the pod
Jul  1 11:12:51.882: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jul  1 11:12:53.887: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:12:53.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2375" for this suite.

• [SLOW TEST:8.163 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":66,"skipped":968,"failed":0}
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:12:53.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Jul  1 11:12:54.037: INFO: Waiting up to 5m0s for pod "pod-469c4b72-30b7-413f-9e24-61b0c4eafb29" in namespace "emptydir-508" to be "Succeeded or Failed"
Jul  1 11:12:54.089: INFO: Pod "pod-469c4b72-30b7-413f-9e24-61b0c4eafb29": Phase="Pending", Reason="", readiness=false. Elapsed: 52.490775ms
Jul  1 11:12:56.093: INFO: Pod "pod-469c4b72-30b7-413f-9e24-61b0c4eafb29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056358288s
Jul  1 11:12:58.119: INFO: Pod "pod-469c4b72-30b7-413f-9e24-61b0c4eafb29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082069024s
STEP: Saw pod success
Jul  1 11:12:58.119: INFO: Pod "pod-469c4b72-30b7-413f-9e24-61b0c4eafb29" satisfied condition "Succeeded or Failed"
Jul  1 11:12:58.121: INFO: Trying to get logs from node kali-worker2 pod pod-469c4b72-30b7-413f-9e24-61b0c4eafb29 container test-container: 
STEP: delete the pod
Jul  1 11:12:58.188: INFO: Waiting for pod pod-469c4b72-30b7-413f-9e24-61b0c4eafb29 to disappear
Jul  1 11:12:58.201: INFO: Pod pod-469c4b72-30b7-413f-9e24-61b0c4eafb29 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:12:58.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-508" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":968,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:12:58.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Jul  1 11:12:58.290: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:12:58.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8839" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":68,"skipped":982,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:12:58.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Jul  1 11:12:58.554: INFO: Waiting up to 5m0s for pod "var-expansion-1f1c24de-1881-4f2c-917c-ffb455da3f67" in namespace "var-expansion-6409" to be "Succeeded or Failed"
Jul  1 11:12:58.561: INFO: Pod "var-expansion-1f1c24de-1881-4f2c-917c-ffb455da3f67": Phase="Pending", Reason="", readiness=false. Elapsed: 7.304275ms
Jul  1 11:13:00.598: INFO: Pod "var-expansion-1f1c24de-1881-4f2c-917c-ffb455da3f67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044121838s
Jul  1 11:13:02.603: INFO: Pod "var-expansion-1f1c24de-1881-4f2c-917c-ffb455da3f67": Phase="Running", Reason="", readiness=true. Elapsed: 4.048628971s
Jul  1 11:13:04.607: INFO: Pod "var-expansion-1f1c24de-1881-4f2c-917c-ffb455da3f67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052965218s
STEP: Saw pod success
Jul  1 11:13:04.607: INFO: Pod "var-expansion-1f1c24de-1881-4f2c-917c-ffb455da3f67" satisfied condition "Succeeded or Failed"
Jul  1 11:13:04.610: INFO: Trying to get logs from node kali-worker pod var-expansion-1f1c24de-1881-4f2c-917c-ffb455da3f67 container dapi-container: 
STEP: delete the pod
Jul  1 11:13:04.683: INFO: Waiting for pod var-expansion-1f1c24de-1881-4f2c-917c-ffb455da3f67 to disappear
Jul  1 11:13:04.689: INFO: Pod var-expansion-1f1c24de-1881-4f2c-917c-ffb455da3f67 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:13:04.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6409" for this suite.

• [SLOW TEST:6.298 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1004,"failed":0}
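The var-expansion test above exercises Kubernetes' `$(VAR)` substitution, where a later env entry may reference earlier ones. A rough sketch of that expansion rule (simplified; the real kubelet implementation also handles `$$` escaping, and the variable names here are illustrative):

```python
import re

def expand_env(env_list):
    """Expand $(NAME) references against previously-defined entries;
    unknown references are left as-is, mirroring Kubernetes behavior."""
    resolved = {}
    for name, value in env_list:
        value = re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
                       lambda m: resolved.get(m.group(1), m.group(0)),
                       value)
        resolved[name] = value
    return resolved

# Composing env vars into a new env var, as the conformance test does.
env = expand_env([("FOO", "foo-value"),
                  ("BAR", "bar-value"),
                  ("FOOBAR", "$(FOO);;$(BAR)")])
print(env["FOOBAR"])  # foo-value;;bar-value
```

The test pod prints its composed variable and the framework checks the container log for the expanded value.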
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:13:04.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul  1 11:13:09.356: INFO: Successfully updated pod "pod-update-3e381940-f54f-42a6-9d1c-8230cbd63a98"
STEP: verifying the updated pod is in kubernetes
Jul  1 11:13:09.381: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:13:09.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2582" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1084,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:13:09.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-50fe32c2-92d6-48ec-bb9c-be4bed7f8b9c in namespace container-probe-7937
Jul  1 11:13:15.595: INFO: Started pod liveness-50fe32c2-92d6-48ec-bb9c-be4bed7f8b9c in namespace container-probe-7937
STEP: checking the pod's current state and verifying that restartCount is present
Jul  1 11:13:15.599: INFO: Initial restart count of pod liveness-50fe32c2-92d6-48ec-bb9c-be4bed7f8b9c is 0
Jul  1 11:13:31.712: INFO: Restart count of pod container-probe-7937/liveness-50fe32c2-92d6-48ec-bb9c-be4bed7f8b9c is now 1 (16.112903667s elapsed)
Jul  1 11:13:51.892: INFO: Restart count of pod container-probe-7937/liveness-50fe32c2-92d6-48ec-bb9c-be4bed7f8b9c is now 2 (36.293788943s elapsed)
Jul  1 11:14:11.937: INFO: Restart count of pod container-probe-7937/liveness-50fe32c2-92d6-48ec-bb9c-be4bed7f8b9c is now 3 (56.338234227s elapsed)
Jul  1 11:14:30.395: INFO: Restart count of pod container-probe-7937/liveness-50fe32c2-92d6-48ec-bb9c-be4bed7f8b9c is now 4 (1m14.796312266s elapsed)
Jul  1 11:15:40.743: INFO: Restart count of pod container-probe-7937/liveness-50fe32c2-92d6-48ec-bb9c-be4bed7f8b9c is now 5 (2m25.144436089s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:15:40.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7937" for this suite.

• [SLOW TEST:151.404 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1101,"failed":0}
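The probe test's pass condition is simply that the observed `restartCount` values never decrease as the failing liveness probe keeps restarting the container. Checked against the counts logged above:

```python
# Restart counts observed in the log above, in order of observation.
counts = [0, 1, 2, 3, 4, 5]

# Monotonically increasing: each observation strictly greater than the last.
monotonic = all(b > a for a, b in zip(counts, counts[1:]))
print(monotonic)  # True
```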
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:15:40.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:15:40.856: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:15:42.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6109" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":72,"skipped":1101,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:15:42.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  1 11:15:47.374: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:15:47.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5209" for this suite.

• [SLOW TEST:5.201 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:15:47.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:15:47.716: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"047071d3-80b2-4b15-8ebb-0123c00ab11a", Controller:(*bool)(0xc00255d942), BlockOwnerDeletion:(*bool)(0xc00255d943)}}
Jul  1 11:15:47.774: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"82a3b1ba-724d-47e9-96d8-c41f77c37afc", Controller:(*bool)(0xc0042b658a), BlockOwnerDeletion:(*bool)(0xc0042b658b)}}
Jul  1 11:15:47.790: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"eb3ae9e1-27fe-4d14-866b-ac98f4f63d71", Controller:(*bool)(0xc005c30a42), BlockOwnerDeletion:(*bool)(0xc005c30a43)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:15:52.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6486" for this suite.

• [SLOW TEST:5.290 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":74,"skipped":1269,"failed":0}
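The garbage-collector test builds a three-pod ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, per the `OwnerReferences` logged above) and verifies that deletion still makes progress. A sketch of detecting that cycle from the logged edges:

```python
# Owner edges from the log above: child -> owner.
owners = {"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}

def in_cycle(start, owners):
    """Follow ownerReferences from `start`; True if the walk returns to it."""
    seen, cur = set(), start
    while cur in owners and cur not in seen:
        seen.add(cur)
        cur = owners[cur]
    return cur == start

print(in_cycle("pod1", owners))  # True
```

The conformance requirement is that such a circle must not deadlock the garbage collector: all three pods are deleted despite each blocking its owner's deletion.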
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:15:52.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul  1 11:15:52.941: INFO: Waiting up to 5m0s for pod "downward-api-a1ad5ab8-c4d3-4861-8d7f-2513c6c088fa" in namespace "downward-api-4001" to be "Succeeded or Failed"
Jul  1 11:15:52.997: INFO: Pod "downward-api-a1ad5ab8-c4d3-4861-8d7f-2513c6c088fa": Phase="Pending", Reason="", readiness=false. Elapsed: 56.418064ms
Jul  1 11:15:55.091: INFO: Pod "downward-api-a1ad5ab8-c4d3-4861-8d7f-2513c6c088fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149531734s
Jul  1 11:15:57.095: INFO: Pod "downward-api-a1ad5ab8-c4d3-4861-8d7f-2513c6c088fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153763375s
Jul  1 11:15:59.114: INFO: Pod "downward-api-a1ad5ab8-c4d3-4861-8d7f-2513c6c088fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.1727483s
STEP: Saw pod success
Jul  1 11:15:59.114: INFO: Pod "downward-api-a1ad5ab8-c4d3-4861-8d7f-2513c6c088fa" satisfied condition "Succeeded or Failed"
Jul  1 11:15:59.117: INFO: Trying to get logs from node kali-worker pod downward-api-a1ad5ab8-c4d3-4861-8d7f-2513c6c088fa container dapi-container: 
STEP: delete the pod
Jul  1 11:15:59.225: INFO: Waiting for pod downward-api-a1ad5ab8-c4d3-4861-8d7f-2513c6c088fa to disappear
Jul  1 11:15:59.239: INFO: Pod downward-api-a1ad5ab8-c4d3-4861-8d7f-2513c6c088fa no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:15:59.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4001" for this suite.

• [SLOW TEST:6.408 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1284,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:15:59.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Jul  1 11:15:59.384: INFO: Waiting up to 5m0s for pod "client-containers-eba2e5b6-7b07-4bfe-a37d-8521c1fbf5c7" in namespace "containers-5340" to be "Succeeded or Failed"
Jul  1 11:15:59.386: INFO: Pod "client-containers-eba2e5b6-7b07-4bfe-a37d-8521c1fbf5c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.559177ms
Jul  1 11:16:01.426: INFO: Pod "client-containers-eba2e5b6-7b07-4bfe-a37d-8521c1fbf5c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042345001s
Jul  1 11:16:03.678: INFO: Pod "client-containers-eba2e5b6-7b07-4bfe-a37d-8521c1fbf5c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.294153715s
STEP: Saw pod success
Jul  1 11:16:03.678: INFO: Pod "client-containers-eba2e5b6-7b07-4bfe-a37d-8521c1fbf5c7" satisfied condition "Succeeded or Failed"
Jul  1 11:16:03.681: INFO: Trying to get logs from node kali-worker pod client-containers-eba2e5b6-7b07-4bfe-a37d-8521c1fbf5c7 container test-container: 
STEP: delete the pod
Jul  1 11:16:03.770: INFO: Waiting for pod client-containers-eba2e5b6-7b07-4bfe-a37d-8521c1fbf5c7 to disappear
Jul  1 11:16:03.776: INFO: Pod client-containers-eba2e5b6-7b07-4bfe-a37d-8521c1fbf5c7 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:16:03.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5340" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1320,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:16:03.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:16:20.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2560" for this suite.

• [SLOW TEST:16.702 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":77,"skipped":1328,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:16:20.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Jul  1 11:16:20.790: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix516537855/test'
(Note: the duplicated "kubectl kubectl" in the command string above is an artifact of how the e2e framework concatenates the binary path and argument list; the command actually executed is a single `kubectl proxy` invocation.)
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:16:20.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6407" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":78,"skipped":1361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:16:20.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:16:20.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52104bd9-7932-4b12-a208-00a7db7f9498" in namespace "projected-413" to be "Succeeded or Failed"
Jul  1 11:16:20.962: INFO: Pod "downwardapi-volume-52104bd9-7932-4b12-a208-00a7db7f9498": Phase="Pending", Reason="", readiness=false. Elapsed: 6.677182ms
Jul  1 11:16:22.966: INFO: Pod "downwardapi-volume-52104bd9-7932-4b12-a208-00a7db7f9498": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010755801s
Jul  1 11:16:25.019: INFO: Pod "downwardapi-volume-52104bd9-7932-4b12-a208-00a7db7f9498": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063712033s
STEP: Saw pod success
Jul  1 11:16:25.019: INFO: Pod "downwardapi-volume-52104bd9-7932-4b12-a208-00a7db7f9498" satisfied condition "Succeeded or Failed"
Jul  1 11:16:25.023: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-52104bd9-7932-4b12-a208-00a7db7f9498 container client-container: 
STEP: delete the pod
Jul  1 11:16:25.089: INFO: Waiting for pod downwardapi-volume-52104bd9-7932-4b12-a208-00a7db7f9498 to disappear
Jul  1 11:16:25.151: INFO: Pod downwardapi-volume-52104bd9-7932-4b12-a208-00a7db7f9498 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:16:25.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-413" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1389,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:16:25.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul  1 11:16:33.377: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  1 11:16:33.404: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  1 11:16:35.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  1 11:16:35.408: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  1 11:16:37.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  1 11:16:37.468: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  1 11:16:39.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  1 11:16:39.408: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:16:39.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-419" for this suite.

• [SLOW TEST:14.274 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1409,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:16:39.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1092
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-1092
Jul  1 11:16:39.671: INFO: Found 0 stateful pods, waiting for 1
Jul  1 11:16:49.676: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul  1 11:16:49.722: INFO: Deleting all statefulset in ns statefulset-1092
Jul  1 11:16:49.741: INFO: Scaling statefulset ss to 0
Jul  1 11:17:09.865: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 11:17:09.868: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:17:09.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1092" for this suite.

• [SLOW TEST:30.480 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":81,"skipped":1436,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:17:09.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 11:17:10.599: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 11:17:12.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199030, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199030, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199030, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199030, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 11:17:16.147: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:17:16.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6383" for this suite.
STEP: Destroying namespace "webhook-6383-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.407 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":82,"skipped":1444,"failed":0}
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:17:17.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:17:18.212: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 11:17:20.277: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 11:17:22.216: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 11:17:24.223: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Running (Ready = false)
Jul  1 11:17:26.265: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Running (Ready = false)
Jul  1 11:17:28.242: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Running (Ready = false)
Jul  1 11:17:30.230: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Running (Ready = false)
Jul  1 11:17:32.237: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Running (Ready = false)
Jul  1 11:17:34.216: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Running (Ready = false)
Jul  1 11:17:36.216: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Running (Ready = false)
Jul  1 11:17:38.216: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Running (Ready = false)
Jul  1 11:17:40.216: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Running (Ready = false)
Jul  1 11:17:42.217: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Running (Ready = false)
Jul  1 11:17:44.217: INFO: The status of Pod test-webserver-e8e2d743-cba0-491b-a51a-c2f03fd580d0 is Running (Ready = true)
Jul  1 11:17:44.220: INFO: Container started at 2020-07-01 11:17:21 +0000 UTC, pod became ready at 2020-07-01 11:17:42 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:17:44.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9683" for this suite.

• [SLOW TEST:26.906 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1447,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:17:44.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-956
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-956
STEP: Creating statefulset with conflicting port in namespace statefulset-956
STEP: Waiting until pod test-pod will start running in namespace statefulset-956
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-956
Jul  1 11:17:50.483: INFO: Observed stateful pod in namespace: statefulset-956, name: ss-0, uid: 15c7a5b1-ed79-4efe-a94f-798e06d80bbf, status phase: Failed. Waiting for statefulset controller to delete.
Jul  1 11:17:50.632: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-956
STEP: Removing pod with conflicting port in namespace statefulset-956
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-956 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul  1 11:17:54.787: INFO: Deleting all statefulset in ns statefulset-956
Jul  1 11:17:54.789: INFO: Scaling statefulset ss to 0
Jul  1 11:18:04.803: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 11:18:04.806: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:18:04.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-956" for this suite.

• [SLOW TEST:20.674 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":84,"skipped":1457,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:18:04.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jul  1 11:18:05.062: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1460 /api/v1/namespaces/watch-1460/configmaps/e2e-watch-test-watch-closed e781f2a4-338c-4cbc-a098-14da6a31300f 16790358 0 2020-07-01 11:18:05 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-01 11:18:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul  1 11:18:05.062: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1460 /api/v1/namespaces/watch-1460/configmaps/e2e-watch-test-watch-closed e781f2a4-338c-4cbc-a098-14da6a31300f 16790359 0 2020-07-01 11:18:05 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-01 11:18:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jul  1 11:18:05.079: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1460 /api/v1/namespaces/watch-1460/configmaps/e2e-watch-test-watch-closed e781f2a4-338c-4cbc-a098-14da6a31300f 16790360 0 2020-07-01 11:18:05 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-01 11:18:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul  1 11:18:05.079: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1460 /api/v1/namespaces/watch-1460/configmaps/e2e-watch-test-watch-closed e781f2a4-338c-4cbc-a098-14da6a31300f 16790362 0 2020-07-01 11:18:05 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-01 11:18:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
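The events above print the managed-fields `FieldsV1.Raw` payload as a decimal byte slice rather than readable JSON. Decoding it is purely mechanical; a quick sketch (not part of the test framework) using the byte array from the first ADDED event:

```python
# FieldsV1 Raw payload copied verbatim from the ADDED event above:
# a slice of UTF-8 bytes printed as decimal integers.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123,
       34, 102, 58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 34, 46, 34,
       58, 123, 125, 44, 34, 102, 58, 119, 97, 116, 99, 104, 45, 116, 104,
       105, 115, 45, 99, 111, 110, 102, 105, 103, 109, 97, 112, 34, 58,
       123, 125, 125, 125, 125]

# Joining the bytes recovers the server-side-apply field-ownership JSON.
decoded = bytes(raw).decode("utf-8")
print(decoded)  # {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}
```

The decoded document records which fields the `e2e.test` manager owns: here the `watch-this-configmap` label set when the ConfigMap was created.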
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:18:05.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1460" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":85,"skipped":1495,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:18:05.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Jul  1 11:18:05.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config cluster-info'
Jul  1 11:18:05.303: INFO: stderr: ""
Jul  1 11:18:05.303: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
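The captured stdout is wrapped in ANSI SGR color escapes (`\x1b[0;32m` and friends) because kubectl colorizes `cluster-info` output. A minimal sketch of stripping those escapes to recover the plain text (this is an illustration, not the framework's own validation code):

```python
import re

# First line of the kubectl cluster-info stdout captured above,
# including the ANSI color escape sequences kubectl emitted.
stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n")

# SGR escape sequences have the form ESC [ <params> m.
ansi = re.compile(r"\x1b\[[0-9;]*m")
plain = ansi.sub("", stdout)
print(plain)  # Kubernetes master is running at https://172.30.12.66:32772
```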
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:18:05.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-574" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":86,"skipped":1505,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:18:05.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
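Each STEP above asserts one field of the container's reported status (restart count, pod phase, Ready condition, terminal state). A simplified sketch of such checks over a dict mirroring part of the PodStatus API shape — the sample values are illustrative, not taken from this run, and this is not the e2e framework's code:

```python
# Hypothetical status snapshot shaped like a (partial) PodStatus object.
status = {
    "phase": "Succeeded",
    "containerStatuses": [{
        "name": "terminate-cmd-rpa",
        "restartCount": 2,
        "ready": False,
        "state": {"terminated": {"exitCode": 0}},
    }],
}

def check(status, want_phase, want_restarts, want_ready):
    """Assert the same trio of fields the STEPs above validate."""
    cs = status["containerStatuses"][0]
    assert status["phase"] == want_phase
    assert cs["restartCount"] == want_restarts
    assert cs["ready"] == want_ready
    return True

print(check(status, "Succeeded", 2, False))  # True
```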
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:18:39.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7699" for this suite.

• [SLOW TEST:34.205 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1517,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:18:39.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-9695
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  1 11:18:39.790: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul  1 11:18:39.900: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 11:18:41.979: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 11:18:43.905: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:18:45.905: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:18:47.905: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:18:49.905: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:18:51.905: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:18:53.913: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:18:55.905: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:18:57.906: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul  1 11:18:57.912: INFO: The status of Pod netserver-1 is Running (Ready = true)
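The framework re-checks pod status on a fixed interval (roughly every 2 s in the lines above) until the pod reports Ready or a deadline passes. A minimal, hypothetical polling helper with the same shape (the real framework's loop lives in Go and also honors a timeout):

```python
import itertools

def wait_until(predicate, attempts, on_wait=lambda: None):
    """Call predicate up to `attempts` times; stop early once it's true."""
    for _ in range(attempts):
        if predicate():
            return True
        on_wait()  # in real code: time.sleep(interval)
    return False

# Simulate a pod that becomes Ready on the fifth status check.
checks = itertools.count(1)
ready = wait_until(lambda: next(checks) >= 5, attempts=10)
print(ready)  # True
```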
STEP: Creating test pods
Jul  1 11:19:03.938: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.129:8080/dial?request=hostname&protocol=http&host=10.244.2.128&port=8080&tries=1'] Namespace:pod-network-test-9695 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:19:03.938: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:19:03.977823       7 log.go:172] (0xc002eca160) (0xc0011a1220) Create stream
I0701 11:19:03.977866       7 log.go:172] (0xc002eca160) (0xc0011a1220) Stream added, broadcasting: 1
I0701 11:19:03.979820       7 log.go:172] (0xc002eca160) Reply frame received for 1
I0701 11:19:03.979857       7 log.go:172] (0xc002eca160) (0xc001638460) Create stream
I0701 11:19:03.979870       7 log.go:172] (0xc002eca160) (0xc001638460) Stream added, broadcasting: 3
I0701 11:19:03.980807       7 log.go:172] (0xc002eca160) Reply frame received for 3
I0701 11:19:03.980848       7 log.go:172] (0xc002eca160) (0xc000d4c5a0) Create stream
I0701 11:19:03.980865       7 log.go:172] (0xc002eca160) (0xc000d4c5a0) Stream added, broadcasting: 5
I0701 11:19:03.982135       7 log.go:172] (0xc002eca160) Reply frame received for 5
I0701 11:19:04.073021       7 log.go:172] (0xc002eca160) Data frame received for 3
I0701 11:19:04.073104       7 log.go:172] (0xc001638460) (3) Data frame handling
I0701 11:19:04.073312       7 log.go:172] (0xc001638460) (3) Data frame sent
I0701 11:19:04.073489       7 log.go:172] (0xc002eca160) Data frame received for 5
I0701 11:19:04.073555       7 log.go:172] (0xc000d4c5a0) (5) Data frame handling
I0701 11:19:04.073738       7 log.go:172] (0xc002eca160) Data frame received for 3
I0701 11:19:04.073752       7 log.go:172] (0xc001638460) (3) Data frame handling
I0701 11:19:04.075401       7 log.go:172] (0xc002eca160) Data frame received for 1
I0701 11:19:04.075420       7 log.go:172] (0xc0011a1220) (1) Data frame handling
I0701 11:19:04.075428       7 log.go:172] (0xc0011a1220) (1) Data frame sent
I0701 11:19:04.075440       7 log.go:172] (0xc002eca160) (0xc0011a1220) Stream removed, broadcasting: 1
I0701 11:19:04.075495       7 log.go:172] (0xc002eca160) Go away received
I0701 11:19:04.075575       7 log.go:172] (0xc002eca160) (0xc0011a1220) Stream removed, broadcasting: 1
I0701 11:19:04.075595       7 log.go:172] (0xc002eca160) (0xc001638460) Stream removed, broadcasting: 3
I0701 11:19:04.075613       7 log.go:172] (0xc002eca160) (0xc000d4c5a0) Stream removed, broadcasting: 5
Jul  1 11:19:04.075: INFO: Waiting for responses: map[]
Jul  1 11:19:04.079: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.129:8080/dial?request=hostname&protocol=http&host=10.244.1.133&port=8080&tries=1'] Namespace:pod-network-test-9695 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:19:04.079: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:19:04.111969       7 log.go:172] (0xc0027ca580) (0xc000d4cd20) Create stream
I0701 11:19:04.112003       7 log.go:172] (0xc0027ca580) (0xc000d4cd20) Stream added, broadcasting: 1
I0701 11:19:04.114229       7 log.go:172] (0xc0027ca580) Reply frame received for 1
I0701 11:19:04.114294       7 log.go:172] (0xc0027ca580) (0xc0016386e0) Create stream
I0701 11:19:04.114317       7 log.go:172] (0xc0027ca580) (0xc0016386e0) Stream added, broadcasting: 3
I0701 11:19:04.115575       7 log.go:172] (0xc0027ca580) Reply frame received for 3
I0701 11:19:04.115631       7 log.go:172] (0xc0027ca580) (0xc0016c0320) Create stream
I0701 11:19:04.115660       7 log.go:172] (0xc0027ca580) (0xc0016c0320) Stream added, broadcasting: 5
I0701 11:19:04.119122       7 log.go:172] (0xc0027ca580) Reply frame received for 5
I0701 11:19:04.195243       7 log.go:172] (0xc0027ca580) Data frame received for 3
I0701 11:19:04.195271       7 log.go:172] (0xc0016386e0) (3) Data frame handling
I0701 11:19:04.195291       7 log.go:172] (0xc0016386e0) (3) Data frame sent
I0701 11:19:04.196014       7 log.go:172] (0xc0027ca580) Data frame received for 5
I0701 11:19:04.196058       7 log.go:172] (0xc0016c0320) (5) Data frame handling
I0701 11:19:04.196091       7 log.go:172] (0xc0027ca580) Data frame received for 3
I0701 11:19:04.196115       7 log.go:172] (0xc0016386e0) (3) Data frame handling
I0701 11:19:04.198025       7 log.go:172] (0xc0027ca580) Data frame received for 1
I0701 11:19:04.198087       7 log.go:172] (0xc000d4cd20) (1) Data frame handling
I0701 11:19:04.198122       7 log.go:172] (0xc000d4cd20) (1) Data frame sent
I0701 11:19:04.198144       7 log.go:172] (0xc0027ca580) (0xc000d4cd20) Stream removed, broadcasting: 1
I0701 11:19:04.198170       7 log.go:172] (0xc0027ca580) Go away received
I0701 11:19:04.198298       7 log.go:172] (0xc0027ca580) (0xc000d4cd20) Stream removed, broadcasting: 1
I0701 11:19:04.198333       7 log.go:172] (0xc0027ca580) (0xc0016386e0) Stream removed, broadcasting: 3
I0701 11:19:04.198354       7 log.go:172] (0xc0027ca580) (0xc0016c0320) Stream removed, broadcasting: 5
Jul  1 11:19:04.198: INFO: Waiting for responses: map[]
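Both probes above curl the test pod's `/dial` endpoint, which asks the netserver at `host:port` for its hostname. Rebuilding the first probe's URL from its parts (a sketch; the IPs are the pod IPs from this run):

```python
from urllib.parse import urlencode

# Query parameters for the /dial probe, in the order the log shows them.
params = {
    "request": "hostname",   # path the netserver target should serve
    "protocol": "http",
    "host": "10.244.2.128",  # target netserver pod IP
    "port": "8080",
    "tries": "1",
}
url = "http://10.244.2.129:8080/dial?" + urlencode(params)
print(url)
```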
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:19:04.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9695" for this suite.

• [SLOW TEST:24.691 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1532,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:19:04.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-d0ee4120-7bb5-45e2-b909-5f9303f38ee3
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-d0ee4120-7bb5-45e2-b909-5f9303f38ee3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:19:10.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1696" for this suite.

• [SLOW TEST:6.198 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1551,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:19:10.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9944
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Jul  1 11:19:11.100: INFO: Found 0 stateful pods, waiting for 3
Jul  1 11:19:21.105: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:19:21.105: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:19:21.105: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  1 11:19:31.104: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:19:31.104: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:19:31.104: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:19:31.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9944 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  1 11:19:37.664: INFO: stderr: "I0701 11:19:37.464085     830 log.go:172] (0xc0008589a0) (0xc000591680) Create stream\nI0701 11:19:37.464117     830 log.go:172] (0xc0008589a0) (0xc000591680) Stream added, broadcasting: 1\nI0701 11:19:37.466386     830 log.go:172] (0xc0008589a0) Reply frame received for 1\nI0701 11:19:37.466421     830 log.go:172] (0xc0008589a0) (0xc000591720) Create stream\nI0701 11:19:37.466435     830 log.go:172] (0xc0008589a0) (0xc000591720) Stream added, broadcasting: 3\nI0701 11:19:37.467091     830 log.go:172] (0xc0008589a0) Reply frame received for 3\nI0701 11:19:37.467122     830 log.go:172] (0xc0008589a0) (0xc000732000) Create stream\nI0701 11:19:37.467132     830 log.go:172] (0xc0008589a0) (0xc000732000) Stream added, broadcasting: 5\nI0701 11:19:37.467761     830 log.go:172] (0xc0008589a0) Reply frame received for 5\nI0701 11:19:37.606594     830 log.go:172] (0xc0008589a0) Data frame received for 5\nI0701 11:19:37.606620     830 log.go:172] (0xc000732000) (5) Data frame handling\nI0701 11:19:37.606635     830 log.go:172] (0xc000732000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 11:19:37.655156     830 log.go:172] (0xc0008589a0) Data frame received for 3\nI0701 11:19:37.655185     830 log.go:172] (0xc000591720) (3) Data frame handling\nI0701 11:19:37.655202     830 log.go:172] (0xc000591720) (3) Data frame sent\nI0701 11:19:37.655210     830 log.go:172] (0xc0008589a0) Data frame received for 3\nI0701 11:19:37.655216     830 log.go:172] (0xc000591720) (3) Data frame handling\nI0701 11:19:37.655443     830 log.go:172] (0xc0008589a0) Data frame received for 5\nI0701 11:19:37.655478     830 log.go:172] (0xc000732000) (5) Data frame handling\nI0701 11:19:37.657983     830 log.go:172] (0xc0008589a0) Data frame received for 1\nI0701 11:19:37.658021     830 log.go:172] (0xc000591680) (1) Data frame handling\nI0701 11:19:37.658043     830 log.go:172] (0xc000591680) (1) Data frame sent\nI0701 11:19:37.658066     830 log.go:172] (0xc0008589a0) (0xc000591680) Stream removed, broadcasting: 1\nI0701 11:19:37.658110     830 log.go:172] (0xc0008589a0) Go away received\nI0701 11:19:37.658615     830 log.go:172] (0xc0008589a0) (0xc000591680) Stream removed, broadcasting: 1\nI0701 11:19:37.658646     830 log.go:172] (0xc0008589a0) (0xc000591720) Stream removed, broadcasting: 3\nI0701 11:19:37.658658     830 log.go:172] (0xc0008589a0) (0xc000732000) Stream removed, broadcasting: 5\n"
Jul  1 11:19:37.664: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  1 11:19:37.664: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul  1 11:19:47.695: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul  1 11:19:57.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9944 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 11:19:57.990: INFO: stderr: "I0701 11:19:57.904773     855 log.go:172] (0xc0008b6630) (0xc0005bd400) Create stream\nI0701 11:19:57.904831     855 log.go:172] (0xc0008b6630) (0xc0005bd400) Stream added, broadcasting: 1\nI0701 11:19:57.907053     855 log.go:172] (0xc0008b6630) Reply frame received for 1\nI0701 11:19:57.907099     855 log.go:172] (0xc0008b6630) (0xc0008b2000) Create stream\nI0701 11:19:57.907117     855 log.go:172] (0xc0008b6630) (0xc0008b2000) Stream added, broadcasting: 3\nI0701 11:19:57.908292     855 log.go:172] (0xc0008b6630) Reply frame received for 3\nI0701 11:19:57.908356     855 log.go:172] (0xc0008b6630) (0xc0007d0000) Create stream\nI0701 11:19:57.908383     855 log.go:172] (0xc0008b6630) (0xc0007d0000) Stream added, broadcasting: 5\nI0701 11:19:57.909613     855 log.go:172] (0xc0008b6630) Reply frame received for 5\nI0701 11:19:57.982971     855 log.go:172] (0xc0008b6630) Data frame received for 5\nI0701 11:19:57.983026     855 log.go:172] (0xc0008b6630) Data frame received for 3\nI0701 11:19:57.983059     855 log.go:172] (0xc0008b2000) (3) Data frame handling\nI0701 11:19:57.983074     855 log.go:172] (0xc0008b2000) (3) Data frame sent\nI0701 11:19:57.983084     855 log.go:172] (0xc0008b6630) Data frame received for 3\nI0701 11:19:57.983092     855 log.go:172] (0xc0008b2000) (3) Data frame handling\nI0701 11:19:57.983151     855 log.go:172] (0xc0007d0000) (5) Data frame handling\nI0701 11:19:57.983188     855 log.go:172] (0xc0007d0000) (5) Data frame sent\nI0701 11:19:57.983206     855 log.go:172] (0xc0008b6630) Data frame received for 5\nI0701 11:19:57.983225     855 log.go:172] (0xc0007d0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 11:19:57.984598     855 log.go:172] (0xc0008b6630) Data frame received for 1\nI0701 11:19:57.984636     855 log.go:172] (0xc0005bd400) (1) Data frame handling\nI0701 11:19:57.984671     855 log.go:172] (0xc0005bd400) (1) Data frame sent\nI0701 11:19:57.984846     855 log.go:172] (0xc0008b6630) (0xc0005bd400) Stream removed, broadcasting: 1\nI0701 11:19:57.984893     855 log.go:172] (0xc0008b6630) Go away received\nI0701 11:19:57.985829     855 log.go:172] (0xc0008b6630) (0xc0005bd400) Stream removed, broadcasting: 1\nI0701 11:19:57.985856     855 log.go:172] (0xc0008b6630) (0xc0008b2000) Stream removed, broadcasting: 3\nI0701 11:19:57.985868     855 log.go:172] (0xc0008b6630) (0xc0007d0000) Stream removed, broadcasting: 5\n"
Jul  1 11:19:57.990: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  1 11:19:57.990: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  1 11:20:28.033: INFO: Waiting for StatefulSet statefulset-9944/ss2 to complete update
STEP: Rolling back to a previous revision
Jul  1 11:20:38.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9944 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  1 11:20:38.320: INFO: stderr: "I0701 11:20:38.167639     875 log.go:172] (0xc0009e3ce0) (0xc000b12b40) Create stream\nI0701 11:20:38.167684     875 log.go:172] (0xc0009e3ce0) (0xc000b12b40) Stream added, broadcasting: 1\nI0701 11:20:38.171048     875 log.go:172] (0xc0009e3ce0) Reply frame received for 1\nI0701 11:20:38.171105     875 log.go:172] (0xc0009e3ce0) (0xc000b12be0) Create stream\nI0701 11:20:38.171123     875 log.go:172] (0xc0009e3ce0) (0xc000b12be0) Stream added, broadcasting: 3\nI0701 11:20:38.172106     875 log.go:172] (0xc0009e3ce0) Reply frame received for 3\nI0701 11:20:38.172137     875 log.go:172] (0xc0009e3ce0) (0xc000b12c80) Create stream\nI0701 11:20:38.172145     875 log.go:172] (0xc0009e3ce0) (0xc000b12c80) Stream added, broadcasting: 5\nI0701 11:20:38.173001     875 log.go:172] (0xc0009e3ce0) Reply frame received for 5\nI0701 11:20:38.250938     875 log.go:172] (0xc0009e3ce0) Data frame received for 5\nI0701 11:20:38.250969     875 log.go:172] (0xc000b12c80) (5) Data frame handling\nI0701 11:20:38.250987     875 log.go:172] (0xc000b12c80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 11:20:38.314093     875 log.go:172] (0xc0009e3ce0) Data frame received for 5\nI0701 11:20:38.314123     875 log.go:172] (0xc000b12c80) (5) Data frame handling\nI0701 11:20:38.314172     875 log.go:172] (0xc0009e3ce0) Data frame received for 3\nI0701 11:20:38.314226     875 log.go:172] (0xc000b12be0) (3) Data frame handling\nI0701 11:20:38.314261     875 log.go:172] (0xc000b12be0) (3) Data frame sent\nI0701 11:20:38.314283     875 log.go:172] (0xc0009e3ce0) Data frame received for 3\nI0701 11:20:38.314297     875 log.go:172] (0xc000b12be0) (3) Data frame handling\nI0701 11:20:38.315771     875 log.go:172] (0xc0009e3ce0) Data frame received for 1\nI0701 11:20:38.315790     875 log.go:172] (0xc000b12b40) (1) Data frame handling\nI0701 11:20:38.315801     875 log.go:172] (0xc000b12b40) (1) Data frame sent\nI0701 11:20:38.315817     875 log.go:172] (0xc0009e3ce0) (0xc000b12b40) Stream removed, broadcasting: 1\nI0701 11:20:38.315859     875 log.go:172] (0xc0009e3ce0) Go away received\nI0701 11:20:38.316128     875 log.go:172] (0xc0009e3ce0) (0xc000b12b40) Stream removed, broadcasting: 1\nI0701 11:20:38.316145     875 log.go:172] (0xc0009e3ce0) (0xc000b12be0) Stream removed, broadcasting: 3\nI0701 11:20:38.316154     875 log.go:172] (0xc0009e3ce0) (0xc000b12c80) Stream removed, broadcasting: 5\n"
Jul  1 11:20:38.320: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  1 11:20:38.320: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  1 11:20:48.417: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul  1 11:20:58.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9944 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 11:20:58.770: INFO: stderr: "I0701 11:20:58.671015     892 log.go:172] (0xc0000e8f20) (0xc00091c000) Create stream\nI0701 11:20:58.671074     892 log.go:172] (0xc0000e8f20) (0xc00091c000) Stream added, broadcasting: 1\nI0701 11:20:58.673721     892 log.go:172] (0xc0000e8f20) Reply frame received for 1\nI0701 11:20:58.673757     892 log.go:172] (0xc0000e8f20) (0xc000474000) Create stream\nI0701 11:20:58.673766     892 log.go:172] (0xc0000e8f20) (0xc000474000) Stream added, broadcasting: 3\nI0701 11:20:58.674557     892 log.go:172] (0xc0000e8f20) Reply frame received for 3\nI0701 11:20:58.674587     892 log.go:172] (0xc0000e8f20) (0xc00091c0a0) Create stream\nI0701 11:20:58.674601     892 log.go:172] (0xc0000e8f20) (0xc00091c0a0) Stream added, broadcasting: 5\nI0701 11:20:58.675714     892 log.go:172] (0xc0000e8f20) Reply frame received for 5\nI0701 11:20:58.762396     892 log.go:172] (0xc0000e8f20) Data frame received for 5\nI0701 11:20:58.762427     892 log.go:172] (0xc00091c0a0) (5) Data frame handling\nI0701 11:20:58.762436     892 log.go:172] (0xc00091c0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 11:20:58.762458     892 log.go:172] (0xc0000e8f20) Data frame received for 3\nI0701 11:20:58.762492     892 log.go:172] (0xc000474000) (3) Data frame handling\nI0701 11:20:58.762509     892 log.go:172] (0xc000474000) (3) Data frame sent\nI0701 11:20:58.762525     892 log.go:172] (0xc0000e8f20) Data frame received for 3\nI0701 11:20:58.762540     892 log.go:172] (0xc000474000) (3) Data frame handling\nI0701 11:20:58.762552     892 log.go:172] (0xc0000e8f20) Data frame received for 5\nI0701 11:20:58.762561     892 log.go:172] (0xc00091c0a0) (5) Data frame handling\nI0701 11:20:58.764017     892 log.go:172] (0xc0000e8f20) Data frame received for 1\nI0701 11:20:58.764043     892 log.go:172] (0xc00091c000) (1) Data frame handling\nI0701 11:20:58.764055     892 log.go:172] (0xc00091c000) (1) Data frame sent\nI0701 11:20:58.764069     892 log.go:172] (0xc0000e8f20) (0xc00091c000) Stream removed, broadcasting: 1\nI0701 11:20:58.764117     892 log.go:172] (0xc0000e8f20) Go away received\nI0701 11:20:58.764419     892 log.go:172] (0xc0000e8f20) (0xc00091c000) Stream removed, broadcasting: 1\nI0701 11:20:58.764435     892 log.go:172] (0xc0000e8f20) (0xc000474000) Stream removed, broadcasting: 3\nI0701 11:20:58.764444     892 log.go:172] (0xc0000e8f20) (0xc00091c0a0) Stream removed, broadcasting: 5\n"
Jul  1 11:20:58.771: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  1 11:20:58.771: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
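The lines inside the captured stderr above use the klog header format, e.g. `I0701 11:20:58.671015     892 log.go:172] message` (severity, MMDD, HH:MM:SS.micros, PID, file:line). A small parser for that header (a sketch for reading these transcripts, not part of the test framework):

```python
import re

# klog header: severity letter, MMDD date, microsecond timestamp,
# right-padded PID, source file:line, ']', then the message.
klog = re.compile(
    r"^(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6})"
    r"\s+(?P<pid>\d+) (?P<src>[^\]]+)\] (?P<msg>.*)$"
)

line = "I0701 11:20:58.671015     892 log.go:172] (0xc0000e8f20) (0xc00091c000) Create stream"
m = klog.match(line)
print(m.group("sev"), m.group("pid"), m.group("src"))  # I 892 log.go:172
```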

Jul  1 11:21:08.793: INFO: Waiting for StatefulSet statefulset-9944/ss2 to complete update
Jul  1 11:21:08.793: INFO: Waiting for Pod statefulset-9944/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul  1 11:21:08.793: INFO: Waiting for Pod statefulset-9944/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul  1 11:21:08.793: INFO: Waiting for Pod statefulset-9944/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul  1 11:21:18.894: INFO: Waiting for StatefulSet statefulset-9944/ss2 to complete update
Jul  1 11:21:18.894: INFO: Waiting for Pod statefulset-9944/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul  1 11:21:18.894: INFO: Waiting for Pod statefulset-9944/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul  1 11:21:28.802: INFO: Waiting for StatefulSet statefulset-9944/ss2 to complete update
Jul  1 11:21:28.802: INFO: Waiting for Pod statefulset-9944/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul  1 11:21:38.802: INFO: Deleting all statefulset in ns statefulset-9944
Jul  1 11:21:38.805: INFO: Scaling statefulset ss2 to 0
Jul  1 11:21:58.920: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 11:21:58.923: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:21:59.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9944" for this suite.

• [SLOW TEST:168.689 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":90,"skipped":1553,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:21:59.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:21:59.349: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-7434b398-edca-4b09-ab82-db939c31b4b0" in namespace "security-context-test-5559" to be "Succeeded or Failed"
Jul  1 11:21:59.426: INFO: Pod "busybox-readonly-false-7434b398-edca-4b09-ab82-db939c31b4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 77.287579ms
Jul  1 11:22:01.430: INFO: Pod "busybox-readonly-false-7434b398-edca-4b09-ab82-db939c31b4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08128826s
Jul  1 11:22:03.435: INFO: Pod "busybox-readonly-false-7434b398-edca-4b09-ab82-db939c31b4b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085610027s
Jul  1 11:22:03.435: INFO: Pod "busybox-readonly-false-7434b398-edca-4b09-ab82-db939c31b4b0" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:22:03.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5559" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1562,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:22:03.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-976d9e42-cfa3-4cc8-a056-33af340b991b
STEP: Creating secret with name s-test-opt-upd-d7cc82f3-cc58-4b04-a7cb-7bf63ed97592
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-976d9e42-cfa3-4cc8-a056-33af340b991b
STEP: Updating secret s-test-opt-upd-d7cc82f3-cc58-4b04-a7cb-7bf63ed97592
STEP: Creating secret with name s-test-opt-create-380b5464-0872-4312-bfb7-7007feef1bdc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:22:11.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4959" for this suite.

• [SLOW TEST:8.563 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1570,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:22:12.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Jul  1 11:22:12.194: INFO: Waiting up to 5m0s for pod "var-expansion-1f4221cf-5b80-49c3-bf3e-66320ce7ed5a" in namespace "var-expansion-434" to be "Succeeded or Failed"
Jul  1 11:22:12.215: INFO: Pod "var-expansion-1f4221cf-5b80-49c3-bf3e-66320ce7ed5a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.053429ms
Jul  1 11:22:14.396: INFO: Pod "var-expansion-1f4221cf-5b80-49c3-bf3e-66320ce7ed5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201984918s
Jul  1 11:22:16.401: INFO: Pod "var-expansion-1f4221cf-5b80-49c3-bf3e-66320ce7ed5a": Phase="Running", Reason="", readiness=true. Elapsed: 4.206867931s
Jul  1 11:22:18.406: INFO: Pod "var-expansion-1f4221cf-5b80-49c3-bf3e-66320ce7ed5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.211285649s
STEP: Saw pod success
Jul  1 11:22:18.406: INFO: Pod "var-expansion-1f4221cf-5b80-49c3-bf3e-66320ce7ed5a" satisfied condition "Succeeded or Failed"
Jul  1 11:22:18.408: INFO: Trying to get logs from node kali-worker2 pod var-expansion-1f4221cf-5b80-49c3-bf3e-66320ce7ed5a container dapi-container: 
STEP: delete the pod
Jul  1 11:22:18.567: INFO: Waiting for pod var-expansion-1f4221cf-5b80-49c3-bf3e-66320ce7ed5a to disappear
Jul  1 11:22:18.705: INFO: Pod var-expansion-1f4221cf-5b80-49c3-bf3e-66320ce7ed5a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:22:18.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-434" for this suite.

• [SLOW TEST:6.710 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1601,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:22:18.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:22:18.788: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39b07635-6b96-47a9-a911-fad6339b7ba5" in namespace "downward-api-8989" to be "Succeeded or Failed"
Jul  1 11:22:18.852: INFO: Pod "downwardapi-volume-39b07635-6b96-47a9-a911-fad6339b7ba5": Phase="Pending", Reason="", readiness=false. Elapsed: 63.88629ms
Jul  1 11:22:21.064: INFO: Pod "downwardapi-volume-39b07635-6b96-47a9-a911-fad6339b7ba5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275897086s
Jul  1 11:22:23.244: INFO: Pod "downwardapi-volume-39b07635-6b96-47a9-a911-fad6339b7ba5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.456389792s
STEP: Saw pod success
Jul  1 11:22:23.244: INFO: Pod "downwardapi-volume-39b07635-6b96-47a9-a911-fad6339b7ba5" satisfied condition "Succeeded or Failed"
Jul  1 11:22:23.249: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-39b07635-6b96-47a9-a911-fad6339b7ba5 container client-container: 
STEP: delete the pod
Jul  1 11:22:24.299: INFO: Waiting for pod downwardapi-volume-39b07635-6b96-47a9-a911-fad6339b7ba5 to disappear
Jul  1 11:22:24.322: INFO: Pod downwardapi-volume-39b07635-6b96-47a9-a911-fad6339b7ba5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:22:24.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8989" for this suite.

• [SLOW TEST:5.670 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1612,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:22:24.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Jul  1 11:22:24.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config api-versions'
Jul  1 11:22:25.328: INFO: stderr: ""
Jul  1 11:22:25.328: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:22:25.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2287" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":95,"skipped":1661,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:22:25.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-d271be1d-fc1e-421b-a784-ddba0a6880e0
STEP: Creating a pod to test consume configMaps
Jul  1 11:22:25.612: INFO: Waiting up to 5m0s for pod "pod-configmaps-11474461-81f8-41c5-b7c7-e733ede1bf10" in namespace "configmap-8226" to be "Succeeded or Failed"
Jul  1 11:22:25.618: INFO: Pod "pod-configmaps-11474461-81f8-41c5-b7c7-e733ede1bf10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054789ms
Jul  1 11:22:27.622: INFO: Pod "pod-configmaps-11474461-81f8-41c5-b7c7-e733ede1bf10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010044945s
Jul  1 11:22:29.771: INFO: Pod "pod-configmaps-11474461-81f8-41c5-b7c7-e733ede1bf10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158711535s
Jul  1 11:22:31.780: INFO: Pod "pod-configmaps-11474461-81f8-41c5-b7c7-e733ede1bf10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167818571s
STEP: Saw pod success
Jul  1 11:22:31.780: INFO: Pod "pod-configmaps-11474461-81f8-41c5-b7c7-e733ede1bf10" satisfied condition "Succeeded or Failed"
Jul  1 11:22:31.783: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-11474461-81f8-41c5-b7c7-e733ede1bf10 container configmap-volume-test: 
STEP: delete the pod
Jul  1 11:22:31.827: INFO: Waiting for pod pod-configmaps-11474461-81f8-41c5-b7c7-e733ede1bf10 to disappear
Jul  1 11:22:31.879: INFO: Pod pod-configmaps-11474461-81f8-41c5-b7c7-e733ede1bf10 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:22:31.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8226" for this suite.

• [SLOW TEST:6.535 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1671,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:22:31.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-3ca594be-7550-4d6a-9075-88839cfa7a0b
STEP: Creating secret with name s-test-opt-upd-225e0730-b77d-4dfb-9010-d15e4cb4dbb4
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3ca594be-7550-4d6a-9075-88839cfa7a0b
STEP: Updating secret s-test-opt-upd-225e0730-b77d-4dfb-9010-d15e4cb4dbb4
STEP: Creating secret with name s-test-opt-create-4ce0d6ea-a8ec-4312-8157-5cfb43d106a9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:22:40.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5345" for this suite.

• [SLOW TEST:8.583 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1672,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:22:40.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul  1 11:22:45.101: INFO: Successfully updated pod "annotationupdatedfefa4f2-5524-47e9-a88d-5199babf7f7b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:22:47.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6534" for this suite.

• [SLOW TEST:6.707 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1675,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:22:47.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Jul  1 11:22:47.444: INFO: namespace kubectl-7459
Jul  1 11:22:47.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7459'
Jul  1 11:22:47.765: INFO: stderr: ""
Jul  1 11:22:47.765: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul  1 11:22:48.771: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 11:22:48.771: INFO: Found 0 / 1
Jul  1 11:22:49.769: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 11:22:49.770: INFO: Found 0 / 1
Jul  1 11:22:50.770: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 11:22:50.770: INFO: Found 0 / 1
Jul  1 11:22:51.769: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 11:22:51.769: INFO: Found 1 / 1
Jul  1 11:22:51.769: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  1 11:22:51.772: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 11:22:51.772: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  1 11:22:51.772: INFO: wait on agnhost-master startup in kubectl-7459 
Jul  1 11:22:51.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs agnhost-master-dd7gj agnhost-master --namespace=kubectl-7459'
Jul  1 11:22:51.888: INFO: stderr: ""
Jul  1 11:22:51.888: INFO: stdout: "Paused\n"
STEP: exposing RC
Jul  1 11:22:51.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7459'
Jul  1 11:22:52.032: INFO: stderr: ""
Jul  1 11:22:52.032: INFO: stdout: "service/rm2 exposed\n"
Jul  1 11:22:52.050: INFO: Service rm2 in namespace kubectl-7459 found.
STEP: exposing service
Jul  1 11:22:54.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7459'
Jul  1 11:22:54.212: INFO: stderr: ""
Jul  1 11:22:54.212: INFO: stdout: "service/rm3 exposed\n"
Jul  1 11:22:54.262: INFO: Service rm3 in namespace kubectl-7459 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:22:56.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7459" for this suite.

• [SLOW TEST:9.099 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":99,"skipped":1680,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:22:56.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  1 11:22:56.434: INFO: Waiting up to 5m0s for pod "pod-ad5b8310-bd02-47ba-8827-da15daf81864" in namespace "emptydir-8865" to be "Succeeded or Failed"
Jul  1 11:22:56.437: INFO: Pod "pod-ad5b8310-bd02-47ba-8827-da15daf81864": Phase="Pending", Reason="", readiness=false. Elapsed: 3.12515ms
Jul  1 11:22:58.508: INFO: Pod "pod-ad5b8310-bd02-47ba-8827-da15daf81864": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073638052s
Jul  1 11:23:00.606: INFO: Pod "pod-ad5b8310-bd02-47ba-8827-da15daf81864": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171970051s
STEP: Saw pod success
Jul  1 11:23:00.606: INFO: Pod "pod-ad5b8310-bd02-47ba-8827-da15daf81864" satisfied condition "Succeeded or Failed"
Jul  1 11:23:00.610: INFO: Trying to get logs from node kali-worker pod pod-ad5b8310-bd02-47ba-8827-da15daf81864 container test-container: 
STEP: delete the pod
Jul  1 11:23:00.888: INFO: Waiting for pod pod-ad5b8310-bd02-47ba-8827-da15daf81864 to disappear
Jul  1 11:23:00.892: INFO: Pod pod-ad5b8310-bd02-47ba-8827-da15daf81864 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:23:00.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8865" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1687,"failed":0}

------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:23:00.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-2573f27b-1298-4d77-b922-acf782de43e9
STEP: Creating a pod to test consume configMaps
Jul  1 11:23:01.017: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-51bdc2ce-b9b6-4056-a63b-304361eaefb0" in namespace "projected-7533" to be "Succeeded or Failed"
Jul  1 11:23:01.021: INFO: Pod "pod-projected-configmaps-51bdc2ce-b9b6-4056-a63b-304361eaefb0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.402902ms
Jul  1 11:23:03.025: INFO: Pod "pod-projected-configmaps-51bdc2ce-b9b6-4056-a63b-304361eaefb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008197246s
Jul  1 11:23:05.030: INFO: Pod "pod-projected-configmaps-51bdc2ce-b9b6-4056-a63b-304361eaefb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012763822s
STEP: Saw pod success
Jul  1 11:23:05.030: INFO: Pod "pod-projected-configmaps-51bdc2ce-b9b6-4056-a63b-304361eaefb0" satisfied condition "Succeeded or Failed"
Jul  1 11:23:05.034: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-51bdc2ce-b9b6-4056-a63b-304361eaefb0 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  1 11:23:05.072: INFO: Waiting for pod pod-projected-configmaps-51bdc2ce-b9b6-4056-a63b-304361eaefb0 to disappear
Jul  1 11:23:05.078: INFO: Pod pod-projected-configmaps-51bdc2ce-b9b6-4056-a63b-304361eaefb0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:23:05.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7533" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1687,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:23:05.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:23:09.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8125" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1696,"failed":0}
SSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:23:09.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:23:09.276: INFO: Waiting up to 5m0s for pod "busybox-user-65534-1890c7a8-73c3-4964-9937-0cc56e24bca0" in namespace "security-context-test-5076" to be "Succeeded or Failed"
Jul  1 11:23:09.279: INFO: Pod "busybox-user-65534-1890c7a8-73c3-4964-9937-0cc56e24bca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.606301ms
Jul  1 11:23:11.284: INFO: Pod "busybox-user-65534-1890c7a8-73c3-4964-9937-0cc56e24bca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008016745s
Jul  1 11:23:13.289: INFO: Pod "busybox-user-65534-1890c7a8-73c3-4964-9937-0cc56e24bca0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012981822s
Jul  1 11:23:13.289: INFO: Pod "busybox-user-65534-1890c7a8-73c3-4964-9937-0cc56e24bca0" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:23:13.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5076" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1701,"failed":0}
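Editor's note: the `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` lines above show the e2e framework's generic wait loop: poll the pod phase on a fixed interval, log the elapsed time, and stop on a terminal phase or timeout. A minimal Python sketch of that pattern (the `get_phase` callable is a hypothetical stand-in for the API call; this is not the framework's actual Go code):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns a phase
    in `want`, or raise TimeoutError once `timeout` seconds have elapsed.
    Mirrors the 'Waiting up to 5m0s for pod ...' loop in the log above."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still in phase {phase!r} after {timeout}s")
        sleep(interval)

# Simulated sequence matching the log: two Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), interval=0, sleep=lambda s: None)
```

The injectable `clock`/`sleep` parameters are only there so the sketch can be exercised without real waiting; the framework's own loop simply sleeps.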
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:23:13.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  1 11:23:13.478: INFO: Waiting up to 5m0s for pod "pod-73709741-79af-4bdf-a67b-4b2c8f02a048" in namespace "emptydir-1439" to be "Succeeded or Failed"
Jul  1 11:23:13.498: INFO: Pod "pod-73709741-79af-4bdf-a67b-4b2c8f02a048": Phase="Pending", Reason="", readiness=false. Elapsed: 19.851789ms
Jul  1 11:23:15.504: INFO: Pod "pod-73709741-79af-4bdf-a67b-4b2c8f02a048": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025250683s
Jul  1 11:23:17.507: INFO: Pod "pod-73709741-79af-4bdf-a67b-4b2c8f02a048": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028969692s
STEP: Saw pod success
Jul  1 11:23:17.507: INFO: Pod "pod-73709741-79af-4bdf-a67b-4b2c8f02a048" satisfied condition "Succeeded or Failed"
Jul  1 11:23:17.510: INFO: Trying to get logs from node kali-worker pod pod-73709741-79af-4bdf-a67b-4b2c8f02a048 container test-container: 
STEP: delete the pod
Jul  1 11:23:17.543: INFO: Waiting for pod pod-73709741-79af-4bdf-a67b-4b2c8f02a048 to disappear
Jul  1 11:23:17.558: INFO: Pod pod-73709741-79af-4bdf-a67b-4b2c8f02a048 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:23:17.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1439" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1727,"failed":0}
SS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:23:17.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jul  1 11:23:24.192: INFO: Successfully updated pod "adopt-release-5548p"
STEP: Checking that the Job readopts the Pod
Jul  1 11:23:24.192: INFO: Waiting up to 15m0s for pod "adopt-release-5548p" in namespace "job-973" to be "adopted"
Jul  1 11:23:24.256: INFO: Pod "adopt-release-5548p": Phase="Running", Reason="", readiness=true. Elapsed: 63.93264ms
Jul  1 11:23:26.260: INFO: Pod "adopt-release-5548p": Phase="Running", Reason="", readiness=true. Elapsed: 2.067743199s
Jul  1 11:23:26.260: INFO: Pod "adopt-release-5548p" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jul  1 11:23:26.899: INFO: Successfully updated pod "adopt-release-5548p"
STEP: Checking that the Job releases the Pod
Jul  1 11:23:26.899: INFO: Waiting up to 15m0s for pod "adopt-release-5548p" in namespace "job-973" to be "released"
Jul  1 11:23:27.012: INFO: Pod "adopt-release-5548p": Phase="Running", Reason="", readiness=true. Elapsed: 112.253518ms
Jul  1 11:23:27.012: INFO: Pod "adopt-release-5548p" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:23:27.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-973" for this suite.

• [SLOW TEST:9.728 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":105,"skipped":1729,"failed":0}
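Editor's note: the adopt/release test above works by mutating the Pod's labels and waiting for the Job controller to reconcile ownership: a Pod whose labels match the Job's selector gets an ownerReference (adopted); one whose labels stop matching has it removed (released). A toy Python model of that matching rule, with illustrative field names (not the controller's actual code):

```python
def reconcile_ownership(job_selector, pod):
    """Toy model of controller adoption: if the pod's labels satisfy the
    job's label selector, set an ownerReference to the job; otherwise
    clear it (release). Returns the updated pod dict."""
    matches = all(pod.get("labels", {}).get(k) == v
                  for k, v in job_selector.items())
    pod["ownerReferences"] = (
        [{"kind": "Job", "name": "adopt-release", "controller": True}]
        if matches else []
    )
    return pod

selector = {"job-name": "adopt-release"}
# Orphaned pod still carrying the job's labels -> readopted.
adopted = reconcile_ownership(selector, {"labels": {"job-name": "adopt-release"}})
# Labels removed (as in the 'Removing the labels' STEP) -> released.
released = reconcile_ownership(selector, {"labels": {}})
```

The real controller also checks deletion timestamps and uses a ControllerRef adoption protocol; this sketch captures only the label-match half that the test exercises.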
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:23:27.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:23:27.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul  1 11:23:30.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3780 create -f -'
Jul  1 11:23:34.324: INFO: stderr: ""
Jul  1 11:23:34.324: INFO: stdout: "e2e-test-crd-publish-openapi-6481-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jul  1 11:23:34.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3780 delete e2e-test-crd-publish-openapi-6481-crds test-cr'
Jul  1 11:23:34.541: INFO: stderr: ""
Jul  1 11:23:34.541: INFO: stdout: "e2e-test-crd-publish-openapi-6481-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jul  1 11:23:34.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3780 apply -f -'
Jul  1 11:23:34.817: INFO: stderr: ""
Jul  1 11:23:34.817: INFO: stdout: "e2e-test-crd-publish-openapi-6481-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jul  1 11:23:34.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3780 delete e2e-test-crd-publish-openapi-6481-crds test-cr'
Jul  1 11:23:34.911: INFO: stderr: ""
Jul  1 11:23:34.911: INFO: stdout: "e2e-test-crd-publish-openapi-6481-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul  1 11:23:34.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6481-crds'
Jul  1 11:23:35.209: INFO: stderr: ""
Jul  1 11:23:35.209: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6481-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:23:37.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3780" for this suite.

• [SLOW TEST:9.870 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":106,"skipped":1741,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:23:37.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Jul  1 11:23:37.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4502'
Jul  1 11:23:37.643: INFO: stderr: ""
Jul  1 11:23:37.643: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  1 11:23:37.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4502'
Jul  1 11:23:37.800: INFO: stderr: ""
Jul  1 11:23:37.800: INFO: stdout: "update-demo-nautilus-6vfz6 update-demo-nautilus-zf9qm "
Jul  1 11:23:37.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vfz6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4502'
Jul  1 11:23:37.891: INFO: stderr: ""
Jul  1 11:23:37.892: INFO: stdout: ""
Jul  1 11:23:37.892: INFO: update-demo-nautilus-6vfz6 is created but not running
Jul  1 11:23:42.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4502'
Jul  1 11:23:43.009: INFO: stderr: ""
Jul  1 11:23:43.009: INFO: stdout: "update-demo-nautilus-6vfz6 update-demo-nautilus-zf9qm "
Jul  1 11:23:43.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vfz6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4502'
Jul  1 11:23:43.095: INFO: stderr: ""
Jul  1 11:23:43.095: INFO: stdout: "true"
Jul  1 11:23:43.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vfz6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4502'
Jul  1 11:23:43.192: INFO: stderr: ""
Jul  1 11:23:43.192: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  1 11:23:43.192: INFO: validating pod update-demo-nautilus-6vfz6
Jul  1 11:23:43.204: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  1 11:23:43.204: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  1 11:23:43.204: INFO: update-demo-nautilus-6vfz6 is verified up and running
Jul  1 11:23:43.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zf9qm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4502'
Jul  1 11:23:43.295: INFO: stderr: ""
Jul  1 11:23:43.295: INFO: stdout: "true"
Jul  1 11:23:43.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zf9qm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4502'
Jul  1 11:23:43.392: INFO: stderr: ""
Jul  1 11:23:43.392: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  1 11:23:43.392: INFO: validating pod update-demo-nautilus-zf9qm
Jul  1 11:23:43.413: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  1 11:23:43.413: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  1 11:23:43.413: INFO: update-demo-nautilus-zf9qm is verified up and running
STEP: using delete to clean up resources
Jul  1 11:23:43.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4502'
Jul  1 11:23:43.516: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  1 11:23:43.516: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  1 11:23:43.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4502'
Jul  1 11:23:43.615: INFO: stderr: "No resources found in kubectl-4502 namespace.\n"
Jul  1 11:23:43.615: INFO: stdout: ""
Jul  1 11:23:43.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4502 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  1 11:23:43.708: INFO: stderr: ""
Jul  1 11:23:43.708: INFO: stdout: "update-demo-nautilus-6vfz6\nupdate-demo-nautilus-zf9qm\n"
Jul  1 11:23:44.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4502'
Jul  1 11:23:44.307: INFO: stderr: "No resources found in kubectl-4502 namespace.\n"
Jul  1 11:23:44.307: INFO: stdout: ""
Jul  1 11:23:44.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4502 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  1 11:23:44.495: INFO: stderr: ""
Jul  1 11:23:44.495: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:23:44.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4502" for this suite.

• [SLOW TEST:7.389 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":107,"skipped":1751,"failed":0}
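Editor's note: the `-o template` invocations above encode a two-step readiness check: first that a containerStatus named `update-demo` reports a `running` state, then that the matching container's image is the expected one. A rough Python equivalent of those go-template expressions, operating on plain dicts shaped like the pod API object (field names follow the pod schema seen in the log):

```python
def container_running(pod, name="update-demo"):
    """Equivalent of the status-side template: True iff a containerStatus
    named `name` exists and has a 'running' entry in its state."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False

def container_image(pod, name="update-demo"):
    """Equivalent of the spec-side template: the image of the named
    container, or None if absent."""
    for c in pod.get("spec", {}).get("containers", []):
        if c.get("name") == name:
            return c.get("image")
    return None

pod = {
    "spec": {"containers": [
        {"name": "update-demo",
         "image": "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"}]},
    "status": {"containerStatuses": [
        {"name": "update-demo", "state": {"running": {}}}]},
}
```

Note how the empty-stdout case at 11:23:37 above corresponds to the first function returning False: the template emits nothing when no `running` state exists yet, and the test retries after 5 seconds.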
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:23:44.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:23:56.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9739" for this suite.

• [SLOW TEST:11.922 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":108,"skipped":1756,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:23:56.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-gfjf
STEP: Creating a pod to test atomic-volume-subpath
Jul  1 11:23:56.609: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-gfjf" in namespace "subpath-5774" to be "Succeeded or Failed"
Jul  1 11:23:56.613: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051889ms
Jul  1 11:23:58.618: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008843104s
Jul  1 11:24:00.623: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Running", Reason="", readiness=true. Elapsed: 4.013564801s
Jul  1 11:24:02.627: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Running", Reason="", readiness=true. Elapsed: 6.01772155s
Jul  1 11:24:04.698: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Running", Reason="", readiness=true. Elapsed: 8.088310248s
Jul  1 11:24:06.702: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Running", Reason="", readiness=true. Elapsed: 10.092644737s
Jul  1 11:24:08.706: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Running", Reason="", readiness=true. Elapsed: 12.09623498s
Jul  1 11:24:10.726: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Running", Reason="", readiness=true. Elapsed: 14.116941889s
Jul  1 11:24:12.750: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Running", Reason="", readiness=true. Elapsed: 16.140552267s
Jul  1 11:24:14.753: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Running", Reason="", readiness=true. Elapsed: 18.143398607s
Jul  1 11:24:16.758: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Running", Reason="", readiness=true. Elapsed: 20.148113531s
Jul  1 11:24:18.761: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Running", Reason="", readiness=true. Elapsed: 22.151835786s
Jul  1 11:24:20.766: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Running", Reason="", readiness=true. Elapsed: 24.156344362s
Jul  1 11:24:22.770: INFO: Pod "pod-subpath-test-projected-gfjf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.160817296s
STEP: Saw pod success
Jul  1 11:24:22.770: INFO: Pod "pod-subpath-test-projected-gfjf" satisfied condition "Succeeded or Failed"
Jul  1 11:24:22.774: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-gfjf container test-container-subpath-projected-gfjf: 
STEP: delete the pod
Jul  1 11:24:22.896: INFO: Waiting for pod pod-subpath-test-projected-gfjf to disappear
Jul  1 11:24:22.901: INFO: Pod pod-subpath-test-projected-gfjf no longer exists
STEP: Deleting pod pod-subpath-test-projected-gfjf
Jul  1 11:24:22.901: INFO: Deleting pod "pod-subpath-test-projected-gfjf" in namespace "subpath-5774"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:24:22.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5774" for this suite.

• [SLOW TEST:26.438 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":109,"skipped":1766,"failed":0}
S
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:24:22.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:24:23.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-2496" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":110,"skipped":1767,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:24:23.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-803414fb-9b51-425d-9993-e4f9b6aa0430
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:24:29.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3897" for this suite.

• [SLOW TEST:6.154 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1769,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:24:29.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:24:29.353: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90b4a6ae-6eda-40f0-b3f5-82ffff2a4fee" in namespace "projected-4086" to be "Succeeded or Failed"
Jul  1 11:24:29.373: INFO: Pod "downwardapi-volume-90b4a6ae-6eda-40f0-b3f5-82ffff2a4fee": Phase="Pending", Reason="", readiness=false. Elapsed: 19.455928ms
Jul  1 11:24:31.377: INFO: Pod "downwardapi-volume-90b4a6ae-6eda-40f0-b3f5-82ffff2a4fee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024285032s
Jul  1 11:24:33.382: INFO: Pod "downwardapi-volume-90b4a6ae-6eda-40f0-b3f5-82ffff2a4fee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028830019s
STEP: Saw pod success
Jul  1 11:24:33.382: INFO: Pod "downwardapi-volume-90b4a6ae-6eda-40f0-b3f5-82ffff2a4fee" satisfied condition "Succeeded or Failed"
Jul  1 11:24:33.385: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-90b4a6ae-6eda-40f0-b3f5-82ffff2a4fee container client-container: 
STEP: delete the pod
Jul  1 11:24:33.468: INFO: Waiting for pod downwardapi-volume-90b4a6ae-6eda-40f0-b3f5-82ffff2a4fee to disappear
Jul  1 11:24:33.474: INFO: Pod downwardapi-volume-90b4a6ae-6eda-40f0-b3f5-82ffff2a4fee no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:24:33.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4086" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1779,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:24:33.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-6b48ccf0-4d0a-4843-af94-460d9ab78bbe in namespace container-probe-3001
Jul  1 11:24:37.564: INFO: Started pod liveness-6b48ccf0-4d0a-4843-af94-460d9ab78bbe in namespace container-probe-3001
STEP: checking the pod's current state and verifying that restartCount is present
Jul  1 11:24:37.567: INFO: Initial restart count of pod liveness-6b48ccf0-4d0a-4843-af94-460d9ab78bbe is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:28:38.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3001" for this suite.

• [SLOW TEST:245.150 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1780,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:28:38.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-b19a8068-dfd0-44a1-b5e0-b207b7f83f1e
STEP: Creating a pod to test consume configMaps
Jul  1 11:28:39.166: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7fec300-f456-4bf9-b1a6-c34173189325" in namespace "configmap-6591" to be "Succeeded or Failed"
Jul  1 11:28:39.200: INFO: Pod "pod-configmaps-d7fec300-f456-4bf9-b1a6-c34173189325": Phase="Pending", Reason="", readiness=false. Elapsed: 34.261673ms
Jul  1 11:28:41.206: INFO: Pod "pod-configmaps-d7fec300-f456-4bf9-b1a6-c34173189325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03984793s
Jul  1 11:28:43.397: INFO: Pod "pod-configmaps-d7fec300-f456-4bf9-b1a6-c34173189325": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.231461976s
STEP: Saw pod success
Jul  1 11:28:43.398: INFO: Pod "pod-configmaps-d7fec300-f456-4bf9-b1a6-c34173189325" satisfied condition "Succeeded or Failed"
Jul  1 11:28:43.401: INFO: Trying to get logs from node kali-worker pod pod-configmaps-d7fec300-f456-4bf9-b1a6-c34173189325 container configmap-volume-test: 
STEP: delete the pod
Jul  1 11:28:43.605: INFO: Waiting for pod pod-configmaps-d7fec300-f456-4bf9-b1a6-c34173189325 to disappear
Jul  1 11:28:43.804: INFO: Pod pod-configmaps-d7fec300-f456-4bf9-b1a6-c34173189325 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:28:43.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6591" for this suite.

• [SLOW TEST:5.180 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1808,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:28:43.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Jul  1 11:28:44.340: INFO: Waiting up to 5m0s for pod "client-containers-3dd3a952-d1f4-41b6-822e-31c526752d18" in namespace "containers-4517" to be "Succeeded or Failed"
Jul  1 11:28:44.398: INFO: Pod "client-containers-3dd3a952-d1f4-41b6-822e-31c526752d18": Phase="Pending", Reason="", readiness=false. Elapsed: 58.054534ms
Jul  1 11:28:46.402: INFO: Pod "client-containers-3dd3a952-d1f4-41b6-822e-31c526752d18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062339552s
Jul  1 11:28:48.407: INFO: Pod "client-containers-3dd3a952-d1f4-41b6-822e-31c526752d18": Phase="Running", Reason="", readiness=true. Elapsed: 4.066981316s
Jul  1 11:28:50.411: INFO: Pod "client-containers-3dd3a952-d1f4-41b6-822e-31c526752d18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071414761s
STEP: Saw pod success
Jul  1 11:28:50.411: INFO: Pod "client-containers-3dd3a952-d1f4-41b6-822e-31c526752d18" satisfied condition "Succeeded or Failed"
Jul  1 11:28:50.415: INFO: Trying to get logs from node kali-worker2 pod client-containers-3dd3a952-d1f4-41b6-822e-31c526752d18 container test-container: 
STEP: delete the pod
Jul  1 11:28:50.463: INFO: Waiting for pod client-containers-3dd3a952-d1f4-41b6-822e-31c526752d18 to disappear
Jul  1 11:28:50.476: INFO: Pod client-containers-3dd3a952-d1f4-41b6-822e-31c526752d18 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:28:50.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4517" for this suite.

• [SLOW TEST:6.670 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":1836,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:28:50.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-3590
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  1 11:28:50.552: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul  1 11:28:50.662: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 11:28:52.666: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 11:28:54.667: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 11:28:56.667: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:28:58.667: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:29:00.667: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:29:02.667: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:29:04.666: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul  1 11:29:04.675: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jul  1 11:29:06.679: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jul  1 11:29:09.272: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jul  1 11:29:10.679: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jul  1 11:29:12.687: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul  1 11:29:18.801: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.148 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3590 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:29:18.801: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:29:18.836012       7 log.go:172] (0xc0027ca420) (0xc002247ae0) Create stream
I0701 11:29:18.836048       7 log.go:172] (0xc0027ca420) (0xc002247ae0) Stream added, broadcasting: 1
I0701 11:29:18.838068       7 log.go:172] (0xc0027ca420) Reply frame received for 1
I0701 11:29:18.838119       7 log.go:172] (0xc0027ca420) (0xc00044ab40) Create stream
I0701 11:29:18.838140       7 log.go:172] (0xc0027ca420) (0xc00044ab40) Stream added, broadcasting: 3
I0701 11:29:18.839170       7 log.go:172] (0xc0027ca420) Reply frame received for 3
I0701 11:29:18.839239       7 log.go:172] (0xc0027ca420) (0xc002247b80) Create stream
I0701 11:29:18.839265       7 log.go:172] (0xc0027ca420) (0xc002247b80) Stream added, broadcasting: 5
I0701 11:29:18.840230       7 log.go:172] (0xc0027ca420) Reply frame received for 5
I0701 11:29:19.922242       7 log.go:172] (0xc0027ca420) Data frame received for 3
I0701 11:29:19.922271       7 log.go:172] (0xc00044ab40) (3) Data frame handling
I0701 11:29:19.922297       7 log.go:172] (0xc00044ab40) (3) Data frame sent
I0701 11:29:19.922310       7 log.go:172] (0xc0027ca420) Data frame received for 3
I0701 11:29:19.922320       7 log.go:172] (0xc00044ab40) (3) Data frame handling
I0701 11:29:19.922615       7 log.go:172] (0xc0027ca420) Data frame received for 5
I0701 11:29:19.922649       7 log.go:172] (0xc002247b80) (5) Data frame handling
I0701 11:29:19.925107       7 log.go:172] (0xc0027ca420) Data frame received for 1
I0701 11:29:19.925341       7 log.go:172] (0xc002247ae0) (1) Data frame handling
I0701 11:29:19.925356       7 log.go:172] (0xc002247ae0) (1) Data frame sent
I0701 11:29:19.925373       7 log.go:172] (0xc0027ca420) (0xc002247ae0) Stream removed, broadcasting: 1
I0701 11:29:19.925428       7 log.go:172] (0xc0027ca420) Go away received
I0701 11:29:19.925487       7 log.go:172] (0xc0027ca420) (0xc002247ae0) Stream removed, broadcasting: 1
I0701 11:29:19.925503       7 log.go:172] (0xc0027ca420) (0xc00044ab40) Stream removed, broadcasting: 3
I0701 11:29:19.925517       7 log.go:172] (0xc0027ca420) (0xc002247b80) Stream removed, broadcasting: 5
Jul  1 11:29:19.925: INFO: Found all expected endpoints: [netserver-0]
Jul  1 11:29:19.939: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.150 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3590 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:29:19.939: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:29:19.964485       7 log.go:172] (0xc002eca580) (0xc0013ee820) Create stream
I0701 11:29:19.964521       7 log.go:172] (0xc002eca580) (0xc0013ee820) Stream added, broadcasting: 1
I0701 11:29:19.966230       7 log.go:172] (0xc002eca580) Reply frame received for 1
I0701 11:29:19.966288       7 log.go:172] (0xc002eca580) (0xc0013eeaa0) Create stream
I0701 11:29:19.966299       7 log.go:172] (0xc002eca580) (0xc0013eeaa0) Stream added, broadcasting: 3
I0701 11:29:19.967161       7 log.go:172] (0xc002eca580) Reply frame received for 3
I0701 11:29:19.967204       7 log.go:172] (0xc002eca580) (0xc002247cc0) Create stream
I0701 11:29:19.967220       7 log.go:172] (0xc002eca580) (0xc002247cc0) Stream added, broadcasting: 5
I0701 11:29:19.967909       7 log.go:172] (0xc002eca580) Reply frame received for 5
I0701 11:29:21.026330       7 log.go:172] (0xc002eca580) Data frame received for 3
I0701 11:29:21.026372       7 log.go:172] (0xc0013eeaa0) (3) Data frame handling
I0701 11:29:21.026396       7 log.go:172] (0xc0013eeaa0) (3) Data frame sent
I0701 11:29:21.026420       7 log.go:172] (0xc002eca580) Data frame received for 3
I0701 11:29:21.026436       7 log.go:172] (0xc0013eeaa0) (3) Data frame handling
I0701 11:29:21.026548       7 log.go:172] (0xc002eca580) Data frame received for 5
I0701 11:29:21.026658       7 log.go:172] (0xc002247cc0) (5) Data frame handling
I0701 11:29:21.028371       7 log.go:172] (0xc002eca580) Data frame received for 1
I0701 11:29:21.028383       7 log.go:172] (0xc0013ee820) (1) Data frame handling
I0701 11:29:21.028394       7 log.go:172] (0xc0013ee820) (1) Data frame sent
I0701 11:29:21.028402       7 log.go:172] (0xc002eca580) (0xc0013ee820) Stream removed, broadcasting: 1
I0701 11:29:21.028471       7 log.go:172] (0xc002eca580) (0xc0013ee820) Stream removed, broadcasting: 1
I0701 11:29:21.028481       7 log.go:172] (0xc002eca580) (0xc0013eeaa0) Stream removed, broadcasting: 3
I0701 11:29:21.028587       7 log.go:172] (0xc002eca580) (0xc002247cc0) Stream removed, broadcasting: 5
I0701 11:29:21.028701       7 log.go:172] (0xc002eca580) Go away received
Jul  1 11:29:21.028: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:29:21.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3590" for this suite.

• [SLOW TEST:30.552 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1868,"failed":0}
S
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:29:21.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8916
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-8916
I0701 11:29:21.753836       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8916, replica count: 2
I0701 11:29:24.804354       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0701 11:29:27.804593       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  1 11:29:27.804: INFO: Creating new exec pod
Jul  1 11:29:32.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8916 execpodnvvr6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul  1 11:29:33.208: INFO: stderr: "I0701 11:29:33.102807    1398 log.go:172] (0xc000b43b80) (0xc0009c8a00) Create stream\nI0701 11:29:33.102867    1398 log.go:172] (0xc000b43b80) (0xc0009c8a00) Stream added, broadcasting: 1\nI0701 11:29:33.108586    1398 log.go:172] (0xc000b43b80) Reply frame received for 1\nI0701 11:29:33.108625    1398 log.go:172] (0xc000b43b80) (0xc000603680) Create stream\nI0701 11:29:33.108635    1398 log.go:172] (0xc000b43b80) (0xc000603680) Stream added, broadcasting: 3\nI0701 11:29:33.109894    1398 log.go:172] (0xc000b43b80) Reply frame received for 3\nI0701 11:29:33.109921    1398 log.go:172] (0xc000b43b80) (0xc00051caa0) Create stream\nI0701 11:29:33.109928    1398 log.go:172] (0xc000b43b80) (0xc00051caa0) Stream added, broadcasting: 5\nI0701 11:29:33.110796    1398 log.go:172] (0xc000b43b80) Reply frame received for 5\nI0701 11:29:33.199023    1398 log.go:172] (0xc000b43b80) Data frame received for 5\nI0701 11:29:33.199061    1398 log.go:172] (0xc00051caa0) (5) Data frame handling\nI0701 11:29:33.199087    1398 log.go:172] (0xc00051caa0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0701 11:29:33.199339    1398 log.go:172] (0xc000b43b80) Data frame received for 5\nI0701 11:29:33.199358    1398 log.go:172] (0xc00051caa0) (5) Data frame handling\nI0701 11:29:33.199371    1398 log.go:172] (0xc00051caa0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0701 11:29:33.199658    1398 log.go:172] (0xc000b43b80) Data frame received for 3\nI0701 11:29:33.199670    1398 log.go:172] (0xc000603680) (3) Data frame handling\nI0701 11:29:33.199782    1398 log.go:172] (0xc000b43b80) Data frame received for 5\nI0701 11:29:33.199811    1398 log.go:172] (0xc00051caa0) (5) Data frame handling\nI0701 11:29:33.201745    1398 log.go:172] (0xc000b43b80) Data frame received for 1\nI0701 11:29:33.201759    1398 log.go:172] (0xc0009c8a00) (1) Data frame handling\nI0701 11:29:33.201765    1398 log.go:172] (0xc0009c8a00) (1) Data frame sent\nI0701 11:29:33.201775    1398 log.go:172] (0xc000b43b80) (0xc0009c8a00) Stream removed, broadcasting: 1\nI0701 11:29:33.201786    1398 log.go:172] (0xc000b43b80) Go away received\nI0701 11:29:33.202280    1398 log.go:172] (0xc000b43b80) (0xc0009c8a00) Stream removed, broadcasting: 1\nI0701 11:29:33.202317    1398 log.go:172] (0xc000b43b80) (0xc000603680) Stream removed, broadcasting: 3\nI0701 11:29:33.202361    1398 log.go:172] (0xc000b43b80) (0xc00051caa0) Stream removed, broadcasting: 5\n"
Jul  1 11:29:33.208: INFO: stdout: ""
Jul  1 11:29:33.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8916 execpodnvvr6 -- /bin/sh -x -c nc -zv -t -w 2 10.108.232.204 80'
Jul  1 11:29:33.428: INFO: stderr: "I0701 11:29:33.358760    1419 log.go:172] (0xc00044c8f0) (0xc0006255e0) Create stream\nI0701 11:29:33.358843    1419 log.go:172] (0xc00044c8f0) (0xc0006255e0) Stream added, broadcasting: 1\nI0701 11:29:33.361446    1419 log.go:172] (0xc00044c8f0) Reply frame received for 1\nI0701 11:29:33.361491    1419 log.go:172] (0xc00044c8f0) (0xc000a38000) Create stream\nI0701 11:29:33.361501    1419 log.go:172] (0xc00044c8f0) (0xc000a38000) Stream added, broadcasting: 3\nI0701 11:29:33.362326    1419 log.go:172] (0xc00044c8f0) Reply frame received for 3\nI0701 11:29:33.362359    1419 log.go:172] (0xc00044c8f0) (0xc000625680) Create stream\nI0701 11:29:33.362370    1419 log.go:172] (0xc00044c8f0) (0xc000625680) Stream added, broadcasting: 5\nI0701 11:29:33.363165    1419 log.go:172] (0xc00044c8f0) Reply frame received for 5\nI0701 11:29:33.419794    1419 log.go:172] (0xc00044c8f0) Data frame received for 3\nI0701 11:29:33.419829    1419 log.go:172] (0xc000a38000) (3) Data frame handling\nI0701 11:29:33.419849    1419 log.go:172] (0xc00044c8f0) Data frame received for 5\nI0701 11:29:33.419857    1419 log.go:172] (0xc000625680) (5) Data frame handling\nI0701 11:29:33.419866    1419 log.go:172] (0xc000625680) (5) Data frame sent\nI0701 11:29:33.419883    1419 log.go:172] (0xc00044c8f0) Data frame received for 5\nI0701 11:29:33.419890    1419 log.go:172] (0xc000625680) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.232.204 80\nConnection to 10.108.232.204 80 port [tcp/http] succeeded!\nI0701 11:29:33.421619    1419 log.go:172] (0xc00044c8f0) Data frame received for 1\nI0701 11:29:33.421651    1419 log.go:172] (0xc0006255e0) (1) Data frame handling\nI0701 11:29:33.421680    1419 log.go:172] (0xc0006255e0) (1) Data frame sent\nI0701 11:29:33.421697    1419 log.go:172] (0xc00044c8f0) (0xc0006255e0) Stream removed, broadcasting: 1\nI0701 11:29:33.421714    1419 log.go:172] (0xc00044c8f0) Go away received\nI0701 11:29:33.421988    1419 log.go:172] (0xc00044c8f0) (0xc0006255e0) Stream removed, broadcasting: 1\nI0701 11:29:33.422072    1419 log.go:172] (0xc00044c8f0) (0xc000a38000) Stream removed, broadcasting: 3\nI0701 11:29:33.422084    1419 log.go:172] (0xc00044c8f0) (0xc000625680) Stream removed, broadcasting: 5\n"
Jul  1 11:29:33.428: INFO: stdout: ""
Jul  1 11:29:33.428: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:29:33.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8916" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:12.472 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":117,"skipped":1869,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:29:33.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0701 11:30:14.610335       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  1 11:30:14.610: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:30:14.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-846" for this suite.

• [SLOW TEST:41.107 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":118,"skipped":1875,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:30:14.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  1 11:30:14.799: INFO: Waiting up to 5m0s for pod "pod-5aff3476-4c3b-455e-a951-32eb5b388090" in namespace "emptydir-7117" to be "Succeeded or Failed"
Jul  1 11:30:14.841: INFO: Pod "pod-5aff3476-4c3b-455e-a951-32eb5b388090": Phase="Pending", Reason="", readiness=false. Elapsed: 42.206664ms
Jul  1 11:30:16.845: INFO: Pod "pod-5aff3476-4c3b-455e-a951-32eb5b388090": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045984993s
Jul  1 11:30:18.849: INFO: Pod "pod-5aff3476-4c3b-455e-a951-32eb5b388090": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050374154s
STEP: Saw pod success
Jul  1 11:30:18.849: INFO: Pod "pod-5aff3476-4c3b-455e-a951-32eb5b388090" satisfied condition "Succeeded or Failed"
Jul  1 11:30:18.852: INFO: Trying to get logs from node kali-worker pod pod-5aff3476-4c3b-455e-a951-32eb5b388090 container test-container: 
STEP: delete the pod
Jul  1 11:30:19.151: INFO: Waiting for pod pod-5aff3476-4c3b-455e-a951-32eb5b388090 to disappear
Jul  1 11:30:19.296: INFO: Pod pod-5aff3476-4c3b-455e-a951-32eb5b388090 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:30:19.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7117" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":1886,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:30:19.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:30:19.708: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8b924a3-6f1a-490b-a177-bbb6568cd70c" in namespace "downward-api-2774" to be "Succeeded or Failed"
Jul  1 11:30:19.779: INFO: Pod "downwardapi-volume-d8b924a3-6f1a-490b-a177-bbb6568cd70c": Phase="Pending", Reason="", readiness=false. Elapsed: 71.076698ms
Jul  1 11:30:22.009: INFO: Pod "downwardapi-volume-d8b924a3-6f1a-490b-a177-bbb6568cd70c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300723576s
Jul  1 11:30:24.134: INFO: Pod "downwardapi-volume-d8b924a3-6f1a-490b-a177-bbb6568cd70c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.426257932s
Jul  1 11:30:26.453: INFO: Pod "downwardapi-volume-d8b924a3-6f1a-490b-a177-bbb6568cd70c": Phase="Running", Reason="", readiness=true. Elapsed: 6.744839919s
Jul  1 11:30:28.457: INFO: Pod "downwardapi-volume-d8b924a3-6f1a-490b-a177-bbb6568cd70c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.748616991s
STEP: Saw pod success
Jul  1 11:30:28.457: INFO: Pod "downwardapi-volume-d8b924a3-6f1a-490b-a177-bbb6568cd70c" satisfied condition "Succeeded or Failed"
Jul  1 11:30:28.459: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d8b924a3-6f1a-490b-a177-bbb6568cd70c container client-container: 
STEP: delete the pod
Jul  1 11:30:28.496: INFO: Waiting for pod downwardapi-volume-d8b924a3-6f1a-490b-a177-bbb6568cd70c to disappear
Jul  1 11:30:28.510: INFO: Pod downwardapi-volume-d8b924a3-6f1a-490b-a177-bbb6568cd70c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:30:28.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2774" for this suite.

• [SLOW TEST:9.171 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":1939,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:30:28.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5467.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5467.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5467.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5467.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5467.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5467.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

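An aside for readers of this log: the `$$` sequences in the probe commands above are Go template escapes for a literal `$`. With that escaping removed, the `awk` pipeline that builds the pod A record simply rewrites the pod's dotted IP into the dashed record name under the test namespace. A minimal sketch (the IP `10.244.1.5` is illustrative, not taken from this run):

```shell
# Illustrative: convert a pod IP into the dashed pod A record form used
# by the probe commands above. The IP here is a made-up example.
ip="10.244.1.5"
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5467.pod.cluster.local"}')
echo "$podARec"   # -> 10-244-1-5.dns-5467.pod.cluster.local
```

The probers then resolve this name over UDP (`dig +notcp`) and TCP (`dig +tcp`) and write an `OK` marker file per check, which the test harness polls for.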
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  1 11:30:36.813: INFO: DNS probes using dns-5467/dns-test-f8f4010d-0721-4ccf-866f-4540cb032b76 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:30:36.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5467" for this suite.

• [SLOW TEST:8.429 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":121,"skipped":1951,"failed":0}
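Each progress line like the one above is a standalone JSON object, so a running tally can be scraped from the log with standard tools. A minimal sketch, assuming GNU sed and an abbreviated sample line (real lines carry the full test name in `msg`):

```shell
# Illustrative: pull completed/total/failed counts out of a suite
# progress line. The sample line below is abbreviated, not verbatim.
line='{"msg":"PASSED","total":275,"completed":121,"skipped":1951,"failed":0}'
echo "$line" | sed -E 's/.*"total":([0-9]+),"completed":([0-9]+).*"failed":([0-9]+).*/\2\/\1 completed, \3 failed/'
# -> 121/275 completed, 0 failed
```

For anything beyond a quick grep, a real JSON parser (e.g. `jq`) is the safer choice, since field order in these lines is not guaranteed.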
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:30:36.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-247
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-247
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-247
Jul  1 11:30:37.756: INFO: Found 0 stateful pods, waiting for 1
Jul  1 11:30:47.761: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul  1 11:30:47.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-247 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  1 11:30:48.028: INFO: stderr: "I0701 11:30:47.894720    1441 log.go:172] (0xc0009a3130) (0xc00097c460) Create stream\nI0701 11:30:47.894766    1441 log.go:172] (0xc0009a3130) (0xc00097c460) Stream added, broadcasting: 1\nI0701 11:30:47.899131    1441 log.go:172] (0xc0009a3130) Reply frame received for 1\nI0701 11:30:47.899166    1441 log.go:172] (0xc0009a3130) (0xc000639680) Create stream\nI0701 11:30:47.899176    1441 log.go:172] (0xc0009a3130) (0xc000639680) Stream added, broadcasting: 3\nI0701 11:30:47.900059    1441 log.go:172] (0xc0009a3130) Reply frame received for 3\nI0701 11:30:47.900100    1441 log.go:172] (0xc0009a3130) (0xc000558aa0) Create stream\nI0701 11:30:47.900112    1441 log.go:172] (0xc0009a3130) (0xc000558aa0) Stream added, broadcasting: 5\nI0701 11:30:47.900844    1441 log.go:172] (0xc0009a3130) Reply frame received for 5\nI0701 11:30:47.958016    1441 log.go:172] (0xc0009a3130) Data frame received for 5\nI0701 11:30:47.958048    1441 log.go:172] (0xc000558aa0) (5) Data frame handling\nI0701 11:30:47.958072    1441 log.go:172] (0xc000558aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 11:30:48.020669    1441 log.go:172] (0xc0009a3130) Data frame received for 3\nI0701 11:30:48.020685    1441 log.go:172] (0xc000639680) (3) Data frame handling\nI0701 11:30:48.020696    1441 log.go:172] (0xc000639680) (3) Data frame sent\nI0701 11:30:48.020994    1441 log.go:172] (0xc0009a3130) Data frame received for 5\nI0701 11:30:48.021007    1441 log.go:172] (0xc000558aa0) (5) Data frame handling\nI0701 11:30:48.021028    1441 log.go:172] (0xc0009a3130) Data frame received for 3\nI0701 11:30:48.021042    1441 log.go:172] (0xc000639680) (3) Data frame handling\nI0701 11:30:48.023364    1441 log.go:172] (0xc0009a3130) Data frame received for 1\nI0701 11:30:48.023383    1441 log.go:172] (0xc00097c460) (1) Data frame handling\nI0701 11:30:48.023395    1441 log.go:172] (0xc00097c460) (1) Data frame sent\nI0701 11:30:48.023409  
  1441 log.go:172] (0xc0009a3130) (0xc00097c460) Stream removed, broadcasting: 1\nI0701 11:30:48.023583    1441 log.go:172] (0xc0009a3130) Go away received\nI0701 11:30:48.023624    1441 log.go:172] (0xc0009a3130) (0xc00097c460) Stream removed, broadcasting: 1\nI0701 11:30:48.023643    1441 log.go:172] (0xc0009a3130) (0xc000639680) Stream removed, broadcasting: 3\nI0701 11:30:48.023650    1441 log.go:172] (0xc0009a3130) (0xc000558aa0) Stream removed, broadcasting: 5\n"
Jul  1 11:30:48.028: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  1 11:30:48.028: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  1 11:30:48.057: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul  1 11:30:58.061: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 11:30:58.061: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 11:30:58.092: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998957s
Jul  1 11:30:59.096: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.980375887s
Jul  1 11:31:00.102: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.975891914s
Jul  1 11:31:01.107: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.970636517s
Jul  1 11:31:02.112: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.965138612s
Jul  1 11:31:03.117: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.959995603s
Jul  1 11:31:04.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.955120376s
Jul  1 11:31:05.159: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.951450676s
Jul  1 11:31:06.162: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.913475347s
Jul  1 11:31:07.166: INFO: Verifying statefulset ss doesn't scale past 1 for another 910.052642ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-247
Jul  1 11:31:08.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-247 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 11:31:08.389: INFO: stderr: "I0701 11:31:08.303479    1461 log.go:172] (0xc000b73290) (0xc000a4c8c0) Create stream\nI0701 11:31:08.303604    1461 log.go:172] (0xc000b73290) (0xc000a4c8c0) Stream added, broadcasting: 1\nI0701 11:31:08.308859    1461 log.go:172] (0xc000b73290) Reply frame received for 1\nI0701 11:31:08.308927    1461 log.go:172] (0xc000b73290) (0xc0006a5680) Create stream\nI0701 11:31:08.308962    1461 log.go:172] (0xc000b73290) (0xc0006a5680) Stream added, broadcasting: 3\nI0701 11:31:08.310158    1461 log.go:172] (0xc000b73290) Reply frame received for 3\nI0701 11:31:08.310203    1461 log.go:172] (0xc000b73290) (0xc00054aaa0) Create stream\nI0701 11:31:08.310217    1461 log.go:172] (0xc000b73290) (0xc00054aaa0) Stream added, broadcasting: 5\nI0701 11:31:08.311220    1461 log.go:172] (0xc000b73290) Reply frame received for 5\nI0701 11:31:08.379152    1461 log.go:172] (0xc000b73290) Data frame received for 3\nI0701 11:31:08.379190    1461 log.go:172] (0xc0006a5680) (3) Data frame handling\nI0701 11:31:08.379201    1461 log.go:172] (0xc0006a5680) (3) Data frame sent\nI0701 11:31:08.379210    1461 log.go:172] (0xc000b73290) Data frame received for 3\nI0701 11:31:08.379217    1461 log.go:172] (0xc0006a5680) (3) Data frame handling\nI0701 11:31:08.379248    1461 log.go:172] (0xc000b73290) Data frame received for 5\nI0701 11:31:08.379259    1461 log.go:172] (0xc00054aaa0) (5) Data frame handling\nI0701 11:31:08.379282    1461 log.go:172] (0xc00054aaa0) (5) Data frame sent\nI0701 11:31:08.379292    1461 log.go:172] (0xc000b73290) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 11:31:08.379299    1461 log.go:172] (0xc00054aaa0) (5) Data frame handling\nI0701 11:31:08.380667    1461 log.go:172] (0xc000b73290) Data frame received for 1\nI0701 11:31:08.380698    1461 log.go:172] (0xc000a4c8c0) (1) Data frame handling\nI0701 11:31:08.380716    1461 log.go:172] (0xc000a4c8c0) (1) Data frame sent\nI0701 11:31:08.380738  
  1461 log.go:172] (0xc000b73290) (0xc000a4c8c0) Stream removed, broadcasting: 1\nI0701 11:31:08.380758    1461 log.go:172] (0xc000b73290) Go away received\nI0701 11:31:08.381358    1461 log.go:172] (0xc000b73290) (0xc000a4c8c0) Stream removed, broadcasting: 1\nI0701 11:31:08.381381    1461 log.go:172] (0xc000b73290) (0xc0006a5680) Stream removed, broadcasting: 3\nI0701 11:31:08.381393    1461 log.go:172] (0xc000b73290) (0xc00054aaa0) Stream removed, broadcasting: 5\n"
Jul  1 11:31:08.389: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  1 11:31:08.389: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  1 11:31:08.393: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:31:18.399: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:31:18.399: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:31:18.399: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul  1 11:31:18.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-247 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  1 11:31:18.626: INFO: stderr: "I0701 11:31:18.542111    1483 log.go:172] (0xc0003cd080) (0xc0009460a0) Create stream\nI0701 11:31:18.542162    1483 log.go:172] (0xc0003cd080) (0xc0009460a0) Stream added, broadcasting: 1\nI0701 11:31:18.544102    1483 log.go:172] (0xc0003cd080) Reply frame received for 1\nI0701 11:31:18.544143    1483 log.go:172] (0xc0003cd080) (0xc0009461e0) Create stream\nI0701 11:31:18.544157    1483 log.go:172] (0xc0003cd080) (0xc0009461e0) Stream added, broadcasting: 3\nI0701 11:31:18.544862    1483 log.go:172] (0xc0003cd080) Reply frame received for 3\nI0701 11:31:18.544891    1483 log.go:172] (0xc0003cd080) (0xc0008aa000) Create stream\nI0701 11:31:18.544900    1483 log.go:172] (0xc0003cd080) (0xc0008aa000) Stream added, broadcasting: 5\nI0701 11:31:18.545872    1483 log.go:172] (0xc0003cd080) Reply frame received for 5\nI0701 11:31:18.618492    1483 log.go:172] (0xc0003cd080) Data frame received for 5\nI0701 11:31:18.618533    1483 log.go:172] (0xc0008aa000) (5) Data frame handling\nI0701 11:31:18.618549    1483 log.go:172] (0xc0008aa000) (5) Data frame sent\nI0701 11:31:18.618561    1483 log.go:172] (0xc0003cd080) Data frame received for 5\nI0701 11:31:18.618572    1483 log.go:172] (0xc0008aa000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 11:31:18.618598    1483 log.go:172] (0xc0003cd080) Data frame received for 3\nI0701 11:31:18.618611    1483 log.go:172] (0xc0009461e0) (3) Data frame handling\nI0701 11:31:18.618630    1483 log.go:172] (0xc0009461e0) (3) Data frame sent\nI0701 11:31:18.618643    1483 log.go:172] (0xc0003cd080) Data frame received for 3\nI0701 11:31:18.618656    1483 log.go:172] (0xc0009461e0) (3) Data frame handling\nI0701 11:31:18.620201    1483 log.go:172] (0xc0003cd080) Data frame received for 1\nI0701 11:31:18.620223    1483 log.go:172] (0xc0009460a0) (1) Data frame handling\nI0701 11:31:18.620243    1483 log.go:172] (0xc0009460a0) (1) Data frame sent\nI0701 11:31:18.620259  
  1483 log.go:172] (0xc0003cd080) (0xc0009460a0) Stream removed, broadcasting: 1\nI0701 11:31:18.620316    1483 log.go:172] (0xc0003cd080) Go away received\nI0701 11:31:18.620519    1483 log.go:172] (0xc0003cd080) (0xc0009460a0) Stream removed, broadcasting: 1\nI0701 11:31:18.620532    1483 log.go:172] (0xc0003cd080) (0xc0009461e0) Stream removed, broadcasting: 3\nI0701 11:31:18.620541    1483 log.go:172] (0xc0003cd080) (0xc0008aa000) Stream removed, broadcasting: 5\n"
Jul  1 11:31:18.626: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  1 11:31:18.626: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  1 11:31:18.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-247 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  1 11:31:18.952: INFO: stderr: "I0701 11:31:18.764748    1505 log.go:172] (0xc00003a840) (0xc0006db720) Create stream\nI0701 11:31:18.764810    1505 log.go:172] (0xc00003a840) (0xc0006db720) Stream added, broadcasting: 1\nI0701 11:31:18.767886    1505 log.go:172] (0xc00003a840) Reply frame received for 1\nI0701 11:31:18.767947    1505 log.go:172] (0xc00003a840) (0xc000ae6000) Create stream\nI0701 11:31:18.767968    1505 log.go:172] (0xc00003a840) (0xc000ae6000) Stream added, broadcasting: 3\nI0701 11:31:18.768757    1505 log.go:172] (0xc00003a840) Reply frame received for 3\nI0701 11:31:18.768785    1505 log.go:172] (0xc00003a840) (0xc00009a000) Create stream\nI0701 11:31:18.768793    1505 log.go:172] (0xc00003a840) (0xc00009a000) Stream added, broadcasting: 5\nI0701 11:31:18.769804    1505 log.go:172] (0xc00003a840) Reply frame received for 5\nI0701 11:31:18.824506    1505 log.go:172] (0xc00003a840) Data frame received for 5\nI0701 11:31:18.824534    1505 log.go:172] (0xc00009a000) (5) Data frame handling\nI0701 11:31:18.824559    1505 log.go:172] (0xc00009a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 11:31:18.941080    1505 log.go:172] (0xc00003a840) Data frame received for 5\nI0701 11:31:18.941249    1505 log.go:172] (0xc00009a000) (5) Data frame handling\nI0701 11:31:18.941274    1505 log.go:172] (0xc00003a840) Data frame received for 3\nI0701 11:31:18.941282    1505 log.go:172] (0xc000ae6000) (3) Data frame handling\nI0701 11:31:18.941290    1505 log.go:172] (0xc000ae6000) (3) Data frame sent\nI0701 11:31:18.941298    1505 log.go:172] (0xc00003a840) Data frame received for 3\nI0701 11:31:18.941307    1505 log.go:172] (0xc000ae6000) (3) Data frame handling\nI0701 11:31:18.943270    1505 log.go:172] (0xc00003a840) Data frame received for 1\nI0701 11:31:18.943295    1505 log.go:172] (0xc0006db720) (1) Data frame handling\nI0701 11:31:18.943310    1505 log.go:172] (0xc0006db720) (1) Data frame sent\nI0701 11:31:18.943336  
  1505 log.go:172] (0xc00003a840) (0xc0006db720) Stream removed, broadcasting: 1\nI0701 11:31:18.943491    1505 log.go:172] (0xc00003a840) Go away received\nI0701 11:31:18.943952    1505 log.go:172] (0xc00003a840) (0xc0006db720) Stream removed, broadcasting: 1\nI0701 11:31:18.943976    1505 log.go:172] (0xc00003a840) (0xc000ae6000) Stream removed, broadcasting: 3\nI0701 11:31:18.943989    1505 log.go:172] (0xc00003a840) (0xc00009a000) Stream removed, broadcasting: 5\n"
Jul  1 11:31:18.952: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  1 11:31:18.952: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  1 11:31:18.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-247 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  1 11:31:19.188: INFO: stderr: "I0701 11:31:19.073078    1527 log.go:172] (0xc000ace000) (0xc000715220) Create stream\nI0701 11:31:19.073341    1527 log.go:172] (0xc000ace000) (0xc000715220) Stream added, broadcasting: 1\nI0701 11:31:19.074774    1527 log.go:172] (0xc000ace000) Reply frame received for 1\nI0701 11:31:19.074802    1527 log.go:172] (0xc000ace000) (0xc000966000) Create stream\nI0701 11:31:19.074811    1527 log.go:172] (0xc000ace000) (0xc000966000) Stream added, broadcasting: 3\nI0701 11:31:19.075710    1527 log.go:172] (0xc000ace000) Reply frame received for 3\nI0701 11:31:19.075749    1527 log.go:172] (0xc000ace000) (0xc0009660a0) Create stream\nI0701 11:31:19.075771    1527 log.go:172] (0xc000ace000) (0xc0009660a0) Stream added, broadcasting: 5\nI0701 11:31:19.076450    1527 log.go:172] (0xc000ace000) Reply frame received for 5\nI0701 11:31:19.143159    1527 log.go:172] (0xc000ace000) Data frame received for 5\nI0701 11:31:19.143190    1527 log.go:172] (0xc0009660a0) (5) Data frame handling\nI0701 11:31:19.143213    1527 log.go:172] (0xc0009660a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 11:31:19.178872    1527 log.go:172] (0xc000ace000) Data frame received for 3\nI0701 11:31:19.178908    1527 log.go:172] (0xc000966000) (3) Data frame handling\nI0701 11:31:19.178921    1527 log.go:172] (0xc000966000) (3) Data frame sent\nI0701 11:31:19.178947    1527 log.go:172] (0xc000ace000) Data frame received for 5\nI0701 11:31:19.178993    1527 log.go:172] (0xc0009660a0) (5) Data frame handling\nI0701 11:31:19.179023    1527 log.go:172] (0xc000ace000) Data frame received for 3\nI0701 11:31:19.179038    1527 log.go:172] (0xc000966000) (3) Data frame handling\nI0701 11:31:19.180853    1527 log.go:172] (0xc000ace000) Data frame received for 1\nI0701 11:31:19.180870    1527 log.go:172] (0xc000715220) (1) Data frame handling\nI0701 11:31:19.180881    1527 log.go:172] (0xc000715220) (1) Data frame sent\nI0701 11:31:19.181053  
  1527 log.go:172] (0xc000ace000) (0xc000715220) Stream removed, broadcasting: 1\nI0701 11:31:19.181258    1527 log.go:172] (0xc000ace000) Go away received\nI0701 11:31:19.181422    1527 log.go:172] (0xc000ace000) (0xc000715220) Stream removed, broadcasting: 1\nI0701 11:31:19.181446    1527 log.go:172] (0xc000ace000) (0xc000966000) Stream removed, broadcasting: 3\nI0701 11:31:19.181463    1527 log.go:172] (0xc000ace000) (0xc0009660a0) Stream removed, broadcasting: 5\n"
Jul  1 11:31:19.188: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  1 11:31:19.188: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  1 11:31:19.188: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 11:31:19.191: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul  1 11:31:29.200: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 11:31:29.200: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 11:31:29.200: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 11:31:29.227: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999135s
Jul  1 11:31:30.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984688263s
Jul  1 11:31:31.243: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.980866845s
Jul  1 11:31:32.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.968160134s
Jul  1 11:31:33.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.937919158s
Jul  1 11:31:34.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.932755642s
Jul  1 11:31:35.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.927227598s
Jul  1 11:31:36.293: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.922617182s
Jul  1 11:31:37.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.91833264s
Jul  1 11:31:38.333: INFO: Verifying statefulset ss doesn't scale past 3 for another 884.23248ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-247
Jul  1 11:31:39.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-247 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 11:31:39.560: INFO: stderr: "I0701 11:31:39.463336    1547 log.go:172] (0xc0000e8420) (0xc0004debe0) Create stream\nI0701 11:31:39.463385    1547 log.go:172] (0xc0000e8420) (0xc0004debe0) Stream added, broadcasting: 1\nI0701 11:31:39.465381    1547 log.go:172] (0xc0000e8420) Reply frame received for 1\nI0701 11:31:39.465413    1547 log.go:172] (0xc0000e8420) (0xc00097e0a0) Create stream\nI0701 11:31:39.465423    1547 log.go:172] (0xc0000e8420) (0xc00097e0a0) Stream added, broadcasting: 3\nI0701 11:31:39.466232    1547 log.go:172] (0xc0000e8420) Reply frame received for 3\nI0701 11:31:39.466263    1547 log.go:172] (0xc0000e8420) (0xc000916000) Create stream\nI0701 11:31:39.466277    1547 log.go:172] (0xc0000e8420) (0xc000916000) Stream added, broadcasting: 5\nI0701 11:31:39.467497    1547 log.go:172] (0xc0000e8420) Reply frame received for 5\nI0701 11:31:39.551731    1547 log.go:172] (0xc0000e8420) Data frame received for 5\nI0701 11:31:39.551772    1547 log.go:172] (0xc000916000) (5) Data frame handling\nI0701 11:31:39.551786    1547 log.go:172] (0xc000916000) (5) Data frame sent\nI0701 11:31:39.551798    1547 log.go:172] (0xc0000e8420) Data frame received for 5\nI0701 11:31:39.551808    1547 log.go:172] (0xc000916000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 11:31:39.551834    1547 log.go:172] (0xc0000e8420) Data frame received for 3\nI0701 11:31:39.551844    1547 log.go:172] (0xc00097e0a0) (3) Data frame handling\nI0701 11:31:39.551855    1547 log.go:172] (0xc00097e0a0) (3) Data frame sent\nI0701 11:31:39.551872    1547 log.go:172] (0xc0000e8420) Data frame received for 3\nI0701 11:31:39.551881    1547 log.go:172] (0xc00097e0a0) (3) Data frame handling\nI0701 11:31:39.552937    1547 log.go:172] (0xc0000e8420) Data frame received for 1\nI0701 11:31:39.552968    1547 log.go:172] (0xc0004debe0) (1) Data frame handling\nI0701 11:31:39.552982    1547 log.go:172] (0xc0004debe0) (1) Data frame sent\nI0701 11:31:39.553011  
  1547 log.go:172] (0xc0000e8420) (0xc0004debe0) Stream removed, broadcasting: 1\nI0701 11:31:39.553027    1547 log.go:172] (0xc0000e8420) Go away received\nI0701 11:31:39.553413    1547 log.go:172] (0xc0000e8420) (0xc0004debe0) Stream removed, broadcasting: 1\nI0701 11:31:39.553430    1547 log.go:172] (0xc0000e8420) (0xc00097e0a0) Stream removed, broadcasting: 3\nI0701 11:31:39.553438    1547 log.go:172] (0xc0000e8420) (0xc000916000) Stream removed, broadcasting: 5\n"
Jul  1 11:31:39.560: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  1 11:31:39.560: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  1 11:31:39.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-247 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 11:31:39.756: INFO: stderr: "I0701 11:31:39.680167    1568 log.go:172] (0xc000926c60) (0xc000ae6140) Create stream\nI0701 11:31:39.680217    1568 log.go:172] (0xc000926c60) (0xc000ae6140) Stream added, broadcasting: 1\nI0701 11:31:39.682692    1568 log.go:172] (0xc000926c60) Reply frame received for 1\nI0701 11:31:39.682729    1568 log.go:172] (0xc000926c60) (0xc00067f2c0) Create stream\nI0701 11:31:39.682746    1568 log.go:172] (0xc000926c60) (0xc00067f2c0) Stream added, broadcasting: 3\nI0701 11:31:39.683822    1568 log.go:172] (0xc000926c60) Reply frame received for 3\nI0701 11:31:39.683863    1568 log.go:172] (0xc000926c60) (0xc000338000) Create stream\nI0701 11:31:39.683877    1568 log.go:172] (0xc000926c60) (0xc000338000) Stream added, broadcasting: 5\nI0701 11:31:39.684664    1568 log.go:172] (0xc000926c60) Reply frame received for 5\nI0701 11:31:39.746663    1568 log.go:172] (0xc000926c60) Data frame received for 5\nI0701 11:31:39.746690    1568 log.go:172] (0xc000338000) (5) Data frame handling\nI0701 11:31:39.746698    1568 log.go:172] (0xc000338000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 11:31:39.746712    1568 log.go:172] (0xc000926c60) Data frame received for 3\nI0701 11:31:39.746717    1568 log.go:172] (0xc00067f2c0) (3) Data frame handling\nI0701 11:31:39.746722    1568 log.go:172] (0xc00067f2c0) (3) Data frame sent\nI0701 11:31:39.746728    1568 log.go:172] (0xc000926c60) Data frame received for 3\nI0701 11:31:39.746732    1568 log.go:172] (0xc00067f2c0) (3) Data frame handling\nI0701 11:31:39.746774    1568 log.go:172] (0xc000926c60) Data frame received for 5\nI0701 11:31:39.746803    1568 log.go:172] (0xc000338000) (5) Data frame handling\nI0701 11:31:39.748334    1568 log.go:172] (0xc000926c60) Data frame received for 1\nI0701 11:31:39.748348    1568 log.go:172] (0xc000ae6140) (1) Data frame handling\nI0701 11:31:39.748354    1568 log.go:172] (0xc000ae6140) (1) Data frame sent\nI0701 11:31:39.748364  
  1568 log.go:172] (0xc000926c60) (0xc000ae6140) Stream removed, broadcasting: 1\nI0701 11:31:39.748372    1568 log.go:172] (0xc000926c60) Go away received\nI0701 11:31:39.748753    1568 log.go:172] (0xc000926c60) (0xc000ae6140) Stream removed, broadcasting: 1\nI0701 11:31:39.748783    1568 log.go:172] (0xc000926c60) (0xc00067f2c0) Stream removed, broadcasting: 3\nI0701 11:31:39.748801    1568 log.go:172] (0xc000926c60) (0xc000338000) Stream removed, broadcasting: 5\n"
Jul  1 11:31:39.756: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  1 11:31:39.756: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  1 11:31:39.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-247 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 11:31:39.968: INFO: stderr: "I0701 11:31:39.879874    1589 log.go:172] (0xc000af6bb0) (0xc0007f8320) Create stream\nI0701 11:31:39.879950    1589 log.go:172] (0xc000af6bb0) (0xc0007f8320) Stream added, broadcasting: 1\nI0701 11:31:39.883300    1589 log.go:172] (0xc000af6bb0) Reply frame received for 1\nI0701 11:31:39.883351    1589 log.go:172] (0xc000af6bb0) (0xc000438be0) Create stream\nI0701 11:31:39.883375    1589 log.go:172] (0xc000af6bb0) (0xc000438be0) Stream added, broadcasting: 3\nI0701 11:31:39.884602    1589 log.go:172] (0xc000af6bb0) Reply frame received for 3\nI0701 11:31:39.884629    1589 log.go:172] (0xc000af6bb0) (0xc000438c80) Create stream\nI0701 11:31:39.884638    1589 log.go:172] (0xc000af6bb0) (0xc000438c80) Stream added, broadcasting: 5\nI0701 11:31:39.885754    1589 log.go:172] (0xc000af6bb0) Reply frame received for 5\nI0701 11:31:39.957672    1589 log.go:172] (0xc000af6bb0) Data frame received for 3\nI0701 11:31:39.957703    1589 log.go:172] (0xc000438be0) (3) Data frame handling\nI0701 11:31:39.957724    1589 log.go:172] (0xc000438be0) (3) Data frame sent\nI0701 11:31:39.957870    1589 log.go:172] (0xc000af6bb0) Data frame received for 3\nI0701 11:31:39.957896    1589 log.go:172] (0xc000438be0) (3) Data frame handling\nI0701 11:31:39.957914    1589 log.go:172] (0xc000af6bb0) Data frame received for 5\nI0701 11:31:39.957926    1589 log.go:172] (0xc000438c80) (5) Data frame handling\nI0701 11:31:39.957932    1589 log.go:172] (0xc000438c80) (5) Data frame sent\nI0701 11:31:39.957937    1589 log.go:172] (0xc000af6bb0) Data frame received for 5\nI0701 11:31:39.957941    1589 log.go:172] (0xc000438c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 11:31:39.959290    1589 log.go:172] (0xc000af6bb0) Data frame received for 1\nI0701 11:31:39.959305    1589 log.go:172] (0xc0007f8320) (1) Data frame handling\nI0701 11:31:39.959313    1589 log.go:172] (0xc0007f8320) (1) Data frame sent\nI0701 11:31:39.959321  
  1589 log.go:172] (0xc000af6bb0) (0xc0007f8320) Stream removed, broadcasting: 1\nI0701 11:31:39.959330    1589 log.go:172] (0xc000af6bb0) Go away received\nI0701 11:31:39.959835    1589 log.go:172] (0xc000af6bb0) (0xc0007f8320) Stream removed, broadcasting: 1\nI0701 11:31:39.959861    1589 log.go:172] (0xc000af6bb0) (0xc000438be0) Stream removed, broadcasting: 3\nI0701 11:31:39.959870    1589 log.go:172] (0xc000af6bb0) (0xc000438c80) Stream removed, broadcasting: 5\n"
Jul  1 11:31:39.968: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  1 11:31:39.968: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  1 11:31:39.968: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul  1 11:32:19.984: INFO: Deleting all statefulset in ns statefulset-247
Jul  1 11:32:19.987: INFO: Scaling statefulset ss to 0
Jul  1 11:32:19.998: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 11:32:20.000: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:32:20.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-247" for this suite.

• [SLOW TEST:103.241 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":122,"skipped":1991,"failed":0}
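The test above verifies StatefulSet ordering guarantees: pods are created in ascending ordinal order and, on scale-down, deleted in descending ordinal order ("scaled down in reverse order"). A minimal sketch of that ordering logic, assuming a simple `ss-<ordinal>` naming scheme as seen in the log (the helper name is illustrative, not the e2e framework's):

```python
def scale_statefulset(current_ordinals, target_replicas):
    """Simulate StatefulSet ordered scaling: pods are created in
    ascending ordinal order and deleted in descending ordinal order."""
    current = sorted(current_ordinals)
    events = []
    # Scale up: create missing ordinals 0..target-1, lowest first.
    for ordinal in range(target_replicas):
        if ordinal not in current:
            events.append(("create", f"ss-{ordinal}"))
    # Scale down: delete ordinals >= target, highest first.
    for ordinal in reversed(current):
        if ordinal >= target_replicas:
            events.append(("delete", f"ss-{ordinal}"))
    return events
```

Scaling `ss` from 3 replicas to 0, as the log does at 11:31:39, yields deletions of ss-2, then ss-1, then ss-0.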
SSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:32:20.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:32:20.467: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-41bb4337-8aa8-4930-b5bf-95311d1e2e25" in namespace "security-context-test-7464" to be "Succeeded or Failed"
Jul  1 11:32:20.478: INFO: Pod "alpine-nnp-false-41bb4337-8aa8-4930-b5bf-95311d1e2e25": Phase="Pending", Reason="", readiness=false. Elapsed: 11.46981ms
Jul  1 11:32:22.483: INFO: Pod "alpine-nnp-false-41bb4337-8aa8-4930-b5bf-95311d1e2e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015772417s
Jul  1 11:32:24.513: INFO: Pod "alpine-nnp-false-41bb4337-8aa8-4930-b5bf-95311d1e2e25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046229055s
Jul  1 11:32:24.513: INFO: Pod "alpine-nnp-false-41bb4337-8aa8-4930-b5bf-95311d1e2e25" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:32:24.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7464" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2000,"failed":0}
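The Security Context test above runs an alpine pod with `allowPrivilegeEscalation: false` and expects it to reach "Succeeded". A hedged sketch of such a pod manifest built as a plain dict (field names follow the Kubernetes Pod spec; the image tag and helper name are illustrative):

```python
def make_nnp_pod(name, image="alpine:3.12"):
    """Build a minimal pod manifest (as a dict) whose container
    disallows privilege escalation, mirroring the e2e test's pod."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": name,
                "image": image,
                # The field under test: no_new_privs is set on the
                # container process, so setuid binaries cannot elevate.
                "securityContext": {"allowPrivilegeEscalation": False},
            }],
        },
    }
```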
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:32:24.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-2780
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  1 11:32:24.587: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul  1 11:32:24.654: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 11:32:26.658: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 11:32:28.658: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:32:30.659: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:32:32.658: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:32:34.659: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:32:36.659: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:32:38.658: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 11:32:40.659: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul  1 11:32:40.665: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jul  1 11:32:42.669: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul  1 11:32:48.728: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.161:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2780 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:32:48.728: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:32:48.765062       7 log.go:172] (0xc002eca580) (0xc000b477c0) Create stream
I0701 11:32:48.765087       7 log.go:172] (0xc002eca580) (0xc000b477c0) Stream added, broadcasting: 1
I0701 11:32:48.766826       7 log.go:172] (0xc002eca580) Reply frame received for 1
I0701 11:32:48.766861       7 log.go:172] (0xc002eca580) (0xc0012ec000) Create stream
I0701 11:32:48.766875       7 log.go:172] (0xc002eca580) (0xc0012ec000) Stream added, broadcasting: 3
I0701 11:32:48.767881       7 log.go:172] (0xc002eca580) Reply frame received for 3
I0701 11:32:48.767954       7 log.go:172] (0xc002eca580) (0xc001ecc0a0) Create stream
I0701 11:32:48.767975       7 log.go:172] (0xc002eca580) (0xc001ecc0a0) Stream added, broadcasting: 5
I0701 11:32:48.768948       7 log.go:172] (0xc002eca580) Reply frame received for 5
I0701 11:32:48.922616       7 log.go:172] (0xc002eca580) Data frame received for 3
I0701 11:32:48.922661       7 log.go:172] (0xc0012ec000) (3) Data frame handling
I0701 11:32:48.922702       7 log.go:172] (0xc0012ec000) (3) Data frame sent
I0701 11:32:48.922730       7 log.go:172] (0xc002eca580) Data frame received for 3
I0701 11:32:48.922749       7 log.go:172] (0xc0012ec000) (3) Data frame handling
I0701 11:32:48.923396       7 log.go:172] (0xc002eca580) Data frame received for 5
I0701 11:32:48.923450       7 log.go:172] (0xc001ecc0a0) (5) Data frame handling
I0701 11:32:48.928646       7 log.go:172] (0xc002eca580) Data frame received for 1
I0701 11:32:48.928686       7 log.go:172] (0xc000b477c0) (1) Data frame handling
I0701 11:32:48.928706       7 log.go:172] (0xc000b477c0) (1) Data frame sent
I0701 11:32:48.928724       7 log.go:172] (0xc002eca580) (0xc000b477c0) Stream removed, broadcasting: 1
I0701 11:32:48.928753       7 log.go:172] (0xc002eca580) Go away received
I0701 11:32:48.928863       7 log.go:172] (0xc002eca580) (0xc000b477c0) Stream removed, broadcasting: 1
I0701 11:32:48.928905       7 log.go:172] (0xc002eca580) (0xc0012ec000) Stream removed, broadcasting: 3
I0701 11:32:48.928925       7 log.go:172] (0xc002eca580) (0xc001ecc0a0) Stream removed, broadcasting: 5
Jul  1 11:32:48.928: INFO: Found all expected endpoints: [netserver-0]
Jul  1 11:32:48.932: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.160:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2780 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:32:48.932: INFO: >>> kubeConfig: /root/.kube/config
I0701 11:32:48.962189       7 log.go:172] (0xc002ecabb0) (0xc000101e00) Create stream
I0701 11:32:48.962219       7 log.go:172] (0xc002ecabb0) (0xc000101e00) Stream added, broadcasting: 1
I0701 11:32:48.963955       7 log.go:172] (0xc002ecabb0) Reply frame received for 1
I0701 11:32:48.963986       7 log.go:172] (0xc002ecabb0) (0xc001ecc460) Create stream
I0701 11:32:48.964002       7 log.go:172] (0xc002ecabb0) (0xc001ecc460) Stream added, broadcasting: 3
I0701 11:32:48.965014       7 log.go:172] (0xc002ecabb0) Reply frame received for 3
I0701 11:32:48.965052       7 log.go:172] (0xc002ecabb0) (0xc0011a10e0) Create stream
I0701 11:32:48.965070       7 log.go:172] (0xc002ecabb0) (0xc0011a10e0) Stream added, broadcasting: 5
I0701 11:32:48.966277       7 log.go:172] (0xc002ecabb0) Reply frame received for 5
I0701 11:32:49.045340       7 log.go:172] (0xc002ecabb0) Data frame received for 3
I0701 11:32:49.045362       7 log.go:172] (0xc001ecc460) (3) Data frame handling
I0701 11:32:49.045374       7 log.go:172] (0xc001ecc460) (3) Data frame sent
I0701 11:32:49.045379       7 log.go:172] (0xc002ecabb0) Data frame received for 3
I0701 11:32:49.045383       7 log.go:172] (0xc001ecc460) (3) Data frame handling
I0701 11:32:49.045562       7 log.go:172] (0xc002ecabb0) Data frame received for 5
I0701 11:32:49.045584       7 log.go:172] (0xc0011a10e0) (5) Data frame handling
I0701 11:32:49.047285       7 log.go:172] (0xc002ecabb0) Data frame received for 1
I0701 11:32:49.047319       7 log.go:172] (0xc000101e00) (1) Data frame handling
I0701 11:32:49.047347       7 log.go:172] (0xc000101e00) (1) Data frame sent
I0701 11:32:49.047374       7 log.go:172] (0xc002ecabb0) (0xc000101e00) Stream removed, broadcasting: 1
I0701 11:32:49.047402       7 log.go:172] (0xc002ecabb0) Go away received
I0701 11:32:49.047534       7 log.go:172] (0xc002ecabb0) (0xc000101e00) Stream removed, broadcasting: 1
I0701 11:32:49.047574       7 log.go:172] (0xc002ecabb0) (0xc001ecc460) Stream removed, broadcasting: 3
I0701 11:32:49.047592       7 log.go:172] (0xc002ecabb0) (0xc0011a10e0) Stream removed, broadcasting: 5
Jul  1 11:32:49.047: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:32:49.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2780" for this suite.

• [SLOW TEST:24.512 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2007,"failed":0}
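The networking check above curls each netserver pod's `/hostName` endpoint from a host-network test pod and passes once every expected hostname has answered ("Found all expected endpoints"). A cluster-free sketch of that collection step, with the fetch function injected to stand in for the `curl` seen in the log:

```python
def check_endpoints(expected, fetch):
    """Probe each endpoint's /hostName and report which expected
    hostnames answered. `expected` maps pod IP -> expected hostname;
    `fetch(ip)` returns the hostname served at that IP or raises."""
    found = set()
    for ip, hostname in expected.items():
        try:
            if fetch(ip) == hostname:
                found.add(hostname)
        except OSError:
            pass  # unreachable endpoint: leave it out of `found`
    return sorted(found)
```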
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:32:49.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:32:49.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7420" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":125,"skipped":2015,"failed":0}
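The discovery test above performs three lookups: find the group in `/apis`, find the group/version under the group's document, and find the resource in the group/version document. A sketch of the same walk over a discovery payload shaped like the API's JSON (the payload in the test below is illustrative):

```python
def find_resource(discovery, group, version, resource):
    """Walk an /apis-style discovery document: confirm the group exists,
    the version is listed under it, and the resource is served there."""
    grp = next((g for g in discovery["groups"] if g["name"] == group), None)
    if grp is None:
        return False
    gv = f"{group}/{version}"
    if not any(v["groupVersion"] == gv for v in grp["versions"]):
        return False
    return any(r["name"] == resource for r in discovery["resources"].get(gv, []))
```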
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:32:49.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:32:49.270: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5967258c-4de5-4cd9-82dc-1e3f8cf6fed6" in namespace "projected-3406" to be "Succeeded or Failed"
Jul  1 11:32:49.275: INFO: Pod "downwardapi-volume-5967258c-4de5-4cd9-82dc-1e3f8cf6fed6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.136206ms
Jul  1 11:32:51.280: INFO: Pod "downwardapi-volume-5967258c-4de5-4cd9-82dc-1e3f8cf6fed6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009862687s
Jul  1 11:32:53.284: INFO: Pod "downwardapi-volume-5967258c-4de5-4cd9-82dc-1e3f8cf6fed6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014481988s
STEP: Saw pod success
Jul  1 11:32:53.284: INFO: Pod "downwardapi-volume-5967258c-4de5-4cd9-82dc-1e3f8cf6fed6" satisfied condition "Succeeded or Failed"
Jul  1 11:32:53.287: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-5967258c-4de5-4cd9-82dc-1e3f8cf6fed6 container client-container: 
STEP: delete the pod
Jul  1 11:32:53.359: INFO: Waiting for pod downwardapi-volume-5967258c-4de5-4cd9-82dc-1e3f8cf6fed6 to disappear
Jul  1 11:32:53.365: INFO: Pod downwardapi-volume-5967258c-4de5-4cd9-82dc-1e3f8cf6fed6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:32:53.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3406" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2071,"failed":0}
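The projected downward API test above sets an explicit `mode` on a volume item and the container asserts the mounted file carries exactly those permission bits. A small sketch of the check the container effectively performs, rendering an octal mode the way `ls -l` would:

```python
def mode_string(mode):
    """Render the low nine permission bits of a file mode as an
    rwx string, e.g. 0o400 -> 'r--------'."""
    flags = "rwxrwxrwx"
    return "".join(
        flags[i] if mode & (1 << (8 - i)) else "-" for i in range(9)
    )
```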
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:32:53.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-3fb9abed-1350-432f-90fd-cf68da3e1bcc
STEP: Creating a pod to test consume secrets
Jul  1 11:32:53.506: INFO: Waiting up to 5m0s for pod "pod-secrets-46b88342-8ddb-49a8-8fce-001b5bb7789f" in namespace "secrets-157" to be "Succeeded or Failed"
Jul  1 11:32:53.509: INFO: Pod "pod-secrets-46b88342-8ddb-49a8-8fce-001b5bb7789f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958092ms
Jul  1 11:32:55.831: INFO: Pod "pod-secrets-46b88342-8ddb-49a8-8fce-001b5bb7789f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32554679s
Jul  1 11:32:57.836: INFO: Pod "pod-secrets-46b88342-8ddb-49a8-8fce-001b5bb7789f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330081983s
Jul  1 11:32:59.840: INFO: Pod "pod-secrets-46b88342-8ddb-49a8-8fce-001b5bb7789f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.334853813s
STEP: Saw pod success
Jul  1 11:32:59.840: INFO: Pod "pod-secrets-46b88342-8ddb-49a8-8fce-001b5bb7789f" satisfied condition "Succeeded or Failed"
Jul  1 11:32:59.844: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-46b88342-8ddb-49a8-8fce-001b5bb7789f container secret-volume-test: 
STEP: delete the pod
Jul  1 11:32:59.922: INFO: Waiting for pod pod-secrets-46b88342-8ddb-49a8-8fce-001b5bb7789f to disappear
Jul  1 11:32:59.928: INFO: Pod pod-secrets-46b88342-8ddb-49a8-8fce-001b5bb7789f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:32:59.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-157" for this suite.

• [SLOW TEST:6.562 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2079,"failed":0}
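In the Secrets volume test above, secret values are stored base64-encoded in the Secret object's `data` field and the kubelet mounts them decoded into the pod. A stdlib illustration of that round trip (the key/value names are illustrative):

```python
import base64

def encode_secret_data(data):
    """Base64-encode string values as they appear in a Secret's
    `data` field when stored in the API."""
    return {k: base64.b64encode(v.encode()).decode() for k, v in data.items()}

def decode_secret_data(data):
    """Reverse: the plaintext the container sees in the mounted files."""
    return {k: base64.b64decode(v).decode() for k, v in data.items()}
```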
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:32:59.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jul  1 11:33:00.745: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jul  1 11:33:02.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199980, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199980, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199980, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199980, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 11:33:05.851: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:33:05.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:33:07.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1282" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:7.258 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":128,"skipped":2139,"failed":0}
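The conversion webhook test above deploys a webhook, creates a v1 custom resource, and reads it back at v2. A hedged sketch of what such a webhook's handler does with a ConversionReview: convert every object to the requested version and echo the request uid. Real webhooks would also move or rename schema fields between versions; here only `apiVersion` is rewritten:

```python
def convert_cr(obj, desired_api_version):
    """Convert a dict-shaped custom resource between versions by
    rewriting apiVersion (field migrations elided for brevity)."""
    converted = dict(obj)
    converted["apiVersion"] = desired_api_version
    return converted

def handle_conversion_review(review):
    """Build a ConversionReview response: converted objects, echoed
    uid, and a Success status."""
    req = review["request"]
    return {
        "response": {
            "uid": req["uid"],
            "result": {"status": "Success"},
            "convertedObjects": [
                convert_cr(o, req["desiredAPIVersion"]) for o in req["objects"]
            ],
        }
    }
```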
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:33:07.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jul  1 11:33:07.236: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:33:15.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8372" for this suite.

• [SLOW TEST:7.856 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":129,"skipped":2153,"failed":0}
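The InitContainer test above confirms that init containers run one at a time, each to completion, before any regular container starts. A sketch of that sequencing, with container work injected as callables returning exit codes:

```python
def run_pod(init_containers, containers):
    """Simulate init-container semantics: each init container must
    finish (exit 0), in order, before any app container starts.
    Entries are (name, fn) pairs where fn() returns an exit code."""
    started = []
    for name, fn in init_containers:
        started.append(name)
        if fn() != 0:
            return started, False  # init failure blocks app containers
    for name, fn in containers:
        started.append(name)
        fn()
    return started, True
```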
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:33:15.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:33:15.125: INFO: Creating deployment "test-recreate-deployment"
Jul  1 11:33:15.139: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jul  1 11:33:15.163: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jul  1 11:33:17.171: INFO: Waiting deployment "test-recreate-deployment" to complete
Jul  1 11:33:17.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199995, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199995, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199995, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729199995, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:33:19.179: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul  1 11:33:19.188: INFO: Updating deployment test-recreate-deployment
Jul  1 11:33:19.188: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul  1 11:33:19.746: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-3020 /apis/apps/v1/namespaces/deployment-3020/deployments/test-recreate-deployment c21e56ab-b2f8-4861-b3f8-855e8a36221a 16795230 2 2020-07-01 11:33:15 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-07-01 11:33:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 
101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-01 11:33:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 
34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028395c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-01 11:33:19 +0000 UTC,LastTransitionTime:2020-07-01 11:33:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet 
"test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-07-01 11:33:19 +0000 UTC,LastTransitionTime:2020-07-01 11:33:15 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jul  1 11:33:19.811: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-3020 /apis/apps/v1/namespaces/deployment-3020/replicasets/test-recreate-deployment-d5667d9c7 1afdf596-77d0-4a15-90ef-af72c313596f 16795228 1 2020-07-01 11:33:19 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment c21e56ab-b2f8-4861-b3f8-855e8a36221a 0xc002839fc0 0xc002839fc1}] []  [{kube-controller-manager Update apps/v1 2020-07-01 11:33:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 49 101 53 54 97 98 45 98 50 102 56 45 52 56 54 49 45 98 51 102 56 45 56 53 53 101 56 97 51 54 50 50 49 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 
123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 
58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027bc048  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  1 11:33:19.811: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul  1 11:33:19.812: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-3020 /apis/apps/v1/namespaces/deployment-3020/replicasets/test-recreate-deployment-74d98b5f7c c390b259-94e5-4a36-9dcc-3aa9f5ccb215 16795219 2 2020-07-01 11:33:15 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment c21e56ab-b2f8-4861-b3f8-855e8a36221a 0xc002839e37 0xc002839e38}] []  [{kube-controller-manager Update apps/v1 2020-07-01 11:33:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 49 101 53 54 97 98 45 98 50 102 56 45 52 56 54 49 45 98 51 102 56 45 56 53 53 101 56 97 51 54 50 50 49 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 
111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002839ec8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  1 11:33:19.815: INFO: Pod "test-recreate-deployment-d5667d9c7-g7rcn" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-g7rcn test-recreate-deployment-d5667d9c7- deployment-3020 /api/v1/namespaces/deployment-3020/pods/test-recreate-deployment-d5667d9c7-g7rcn 1e00cce6-6f28-40d0-8ffa-82dd858ae2a3 16795231 0 2020-07-01 11:33:19 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 1afdf596-77d0-4a15-90ef-af72c313596f 0xc0027bc510 0xc0027bc511}] []  [{kube-controller-manager Update v1 2020-07-01 11:33:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 97 102 100 102 53 57 54 45 55 55 100 48 45 52 97 49 53 45 57 48 101 102 45 97 102 55 50 99 51 49 51 53 57 54 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 
117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-01 11:33:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 
97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whmd5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whmd5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whmd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:33:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:33:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:33:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:33:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-07-01 11:33:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:33:19.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3020" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":130,"skipped":2269,"failed":0}
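The FieldsV1 `Raw` values in the object dumps above are printed by the Go struct formatter as long runs of decimal byte values; they decode to the managed-fields JSON stored by the apiserver (e.g. `{"f:metadata":{"f:annotations":...}}`). A minimal Python sketch of that decoding — the sample string here is a short constructed fragment in the same encoding, not copied verbatim from a dump above:

```python
import json

def decode_raw_bytes(dump: str) -> dict:
    """Turn a space-separated run of decimal byte values into parsed JSON."""
    data = bytes(int(tok) for tok in dump.split())
    return json.loads(data.decode("utf-8"))

# Constructed sample in the same encoding as the dumps above:
# 123='{', 34='"', 102='f', ... decodes to {"f:metadata":{}}
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
print(decode_raw_bytes(sample))  # {'f:metadata': {}}
```

Feeding one of the full `Raw:*[...]` runs above through the same helper yields the complete managed-fields document for that object.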
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:33:19.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:33:20.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7567" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":131,"skipped":2281,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:33:20.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
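The BestEffort and NotBestEffort quota scopes exercised above partition pods by QoS class: a pod is BestEffort only when no container sets any resource requests or limits, so the best-effort quota counts the first pod and ignores the second, and vice versa. A simplified sketch of that classification, with containers modelled as plain dicts rather than real PodSpec objects:

```python
def qos_class(containers: list) -> str:
    """Simplified Kubernetes QoS classification for a pod's containers."""
    requests = [c.get("requests", {}) for c in containers]
    limits = [c.get("limits", {}) for c in containers]
    # BestEffort: no container sets any requests or limits at all.
    if not any(requests) and not any(limits):
        return "BestEffort"
    # Guaranteed: every container sets cpu+memory limits and requests equal
    # limits (requests default to limits when only limits are set).
    if all(
        set(c.get("limits", {})) >= {"cpu", "memory"}
        and c.get("requests", c.get("limits")) == c.get("limits")
        for c in containers
    ):
        return "Guaranteed"
    return "Burstable"

print(qos_class([{}]))                                         # BestEffort
print(qos_class([{"limits": {"cpu": "1", "memory": "1Gi"}}]))  # Guaranteed
print(qos_class([{"requests": {"cpu": "100m"}}]))              # Burstable
```

A quota created with `scopes: ["BestEffort"]` tracks only pods that classify as BestEffort here; `scopes: ["NotBestEffort"]` tracks the other two classes.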
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:33:36.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2055" for this suite.

• [SLOW TEST:16.805 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":132,"skipped":2290,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:33:36.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:33:36.884: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul  1 11:33:36.949: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul  1 11:33:41.975: INFO: Pod name sample-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Jul  1 11:33:41.975: INFO: Creating deployment "test-rolling-update-deployment"
Jul  1 11:33:41.992: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jul  1 11:33:42.003: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul  1 11:33:44.011: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Jul  1 11:33:44.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200022, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200022, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200022, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200022, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:33:46.018: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul  1 11:33:46.027: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-6807 /apis/apps/v1/namespaces/deployment-6807/deployments/test-rolling-update-deployment 0e6f3f0a-1225-410d-8b77-34c2acfd4c70 16795456 1 2020-07-01 11:33:41 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-07-01 11:33:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 
102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-01 11:33:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 
111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005bbe0e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-01 11:33:42 +0000 UTC,LastTransitionTime:2020-07-01 11:33:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-07-01 11:33:45 +0000 UTC,LastTransitionTime:2020-07-01 11:33:42 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul  1 11:33:46.030: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-6807 /apis/apps/v1/namespaces/deployment-6807/replicasets/test-rolling-update-deployment-59d5cb45c7 d42e6086-067d-4f21-adcc-64ad1c430a6b 16795445 1 2020-07-01 11:33:42 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 0e6f3f0a-1225-410d-8b77-34c2acfd4c70 0xc002bbaad7 0xc002bbaad8}] []  [{kube-controller-manager Update apps/v1 2020-07-01 11:33:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 101 54 102 51 102 48 97 45 49 50 50 53 45 52 49 48 100 45 56 98 55 55 45 51 52 99 50 97 99 102 100 52 99 55 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 
110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 
114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002bbab68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul  1 11:33:46.030: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul  1 11:33:46.030: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-6807 /apis/apps/v1/namespaces/deployment-6807/replicasets/test-rolling-update-controller 62681c7a-00b0-41eb-8855-10974f50bd1b 16795455 2 2020-07-01 11:33:36 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 0e6f3f0a-1225-410d-8b77-34c2acfd4c70 0xc002bba9bf 0xc002bba9d0}] []  [{e2e.test Update apps/v1 2020-07-01 11:33:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 
111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-01 11:33:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 101 54 102 51 102 48 97 45 49 50 50 53 45 52 49 48 100 45 56 98 55 55 45 51 52 99 50 97 99 102 100 52 99 55 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 
102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002bbaa68  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  1 11:33:46.034: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-kgjbh" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-kgjbh test-rolling-update-deployment-59d5cb45c7- deployment-6807 /api/v1/namespaces/deployment-6807/pods/test-rolling-update-deployment-59d5cb45c7-kgjbh 320f3c6c-289b-4f9e-b697-36296457767e 16795444 0 2020-07-01 11:33:42 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 d42e6086-067d-4f21-adcc-64ad1c430a6b 0xc005bbe527 0xc005bbe528}] []  [{kube-controller-manager Update v1 2020-07-01 11:33:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 50 101 54 48 56 54 45 48 54 55 100 45 52 102 50 49 45 97 100 99 99 45 54 52 97 100 49 99 52 51 48 97 54 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 
123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-01 11:33:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 
115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 54 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wvhfn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wvhfn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wvhfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileg
ed:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:33:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:33:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 
11:33:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:33:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.166,StartTime:2020-07-01 11:33:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 11:33:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://b4c4f75614d1533818dd144c984cc04357b41e82a2cf922b18bf58c9e16e490e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.166,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
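The `FieldsV1{Raw:*[123 34 ...]}` blocks in the object dumps above are server-side-apply managed-fields JSON that the Go `%v` formatter prints as decimal byte values. A minimal sketch of recovering the JSON (the byte prefix below is taken verbatim from the dumps above):

```python
# The managed-fields Raw payloads are []byte printed as decimal values;
# decoding them as UTF-8 recovers the server-side-apply field-set JSON.
raw_prefix = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34]

decoded = bytes(raw_prefix).decode("utf-8")
print(decoded)  # -> {"f:metadata"
```

Decoding a full payload yields structures like `{"f:metadata":{"f:labels":{...}}}`, recording which manager (e2e.test, kube-controller-manager, kubelet) owns which fields.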
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:33:46.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6807" for this suite.

• [SLOW TEST:9.221 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":133,"skipped":2320,"failed":0}
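The rolling-update parameters in the dump above (one replica, maxSurge 25%, maxUnavailable 25%) explain the observed order of operations: the new ReplicaSet was scaled to 1 before the old one was scaled to 0, because Kubernetes rounds maxSurge up and maxUnavailable down. A minimal sketch of that arithmetic (the helper name is ours, not from client-go):

```python
import math

def rolling_update_bounds(replicas: int, max_surge_pct: int, max_unavailable_pct: int):
    """Resolve percentage-based rolling-update settings to absolute pod counts.

    Kubernetes rounds maxSurge up and maxUnavailable down, so a one-replica
    Deployment at 25%/25% may surge by one pod but may not drop below one.
    """
    surge = math.ceil(replicas * max_surge_pct / 100)               # rounds up
    unavailable = math.floor(replicas * max_unavailable_pct / 100)  # rounds down
    return surge, unavailable

print(rolling_update_bounds(1, 25, 25))  # -> (1, 0): create the new pod first, then delete the old
```

This matches the test's assertion that old pods are deleted only after new ones are available.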
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:33:46.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:34:02.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-847" for this suite.

• [SLOW TEST:16.246 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":134,"skipped":2335,"failed":0}
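The STEP sequence above (create quota, create ConfigMap, verify usage is captured, delete, verify usage is released) can be sketched as a toy usage counter. This is only an illustration of the bookkeeping the test asserts, not the actual quota controller code:

```python
class ToyResourceQuota:
    """Toy model of the quota status the test polls: hard limits plus live usage."""

    def __init__(self, hard):
        self.hard = dict(hard)                 # e.g. {"configmaps": 1}
        self.used = {k: 0 for k in hard}

    def create(self, resource):
        if self.used[resource] + 1 > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] += 1               # status captures creation

    def delete(self, resource):
        self.used[resource] -= 1               # status releases usage

quota = ToyResourceQuota({"configmaps": 1})
quota.create("configmaps")
print(quota.used)   # -> {'configmaps': 1}
quota.delete("configmaps")
print(quota.used)   # -> {'configmaps': 0}
```

The real controller recomputes `status.used` asynchronously, which is why the test "ensures" each state rather than asserting it immediately.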
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:34:02.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 11:34:03.697: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 11:34:05.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:34:07.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:34:09.710: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:34:11.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:34:13.879: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200043, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 11:34:17.114: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:34:19.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3024" for this suite.
STEP: Destroying namespace "webhook-3024-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.772 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":135,"skipped":2367,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:34:21.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9466.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9466.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  1 11:34:32.275: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:32.278: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:32.281: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:32.283: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:32.290: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:32.293: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:32.295: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:32.297: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:32.302: INFO: Lookups using dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local]

Jul  1 11:34:37.311: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:37.313: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:37.316: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:37.318: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:37.326: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:37.329: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:37.332: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:37.334: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:37.340: INFO: Lookups using dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local]

Jul  1 11:34:42.307: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:42.310: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:42.313: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:42.315: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:42.324: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:42.327: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:42.330: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:42.333: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:42.338: INFO: Lookups using dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local]

Jul  1 11:34:47.306: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:47.310: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:47.312: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:47.340: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:47.349: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:47.351: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:47.354: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:47.356: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:47.361: INFO: Lookups using dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local]

Jul  1 11:34:52.306: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:52.309: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:52.312: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:52.314: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:52.322: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:52.324: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:52.327: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:52.329: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:52.335: INFO: Lookups using dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local]

Jul  1 11:34:57.305: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:57.308: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:57.310: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:57.313: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:57.321: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:57.324: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:57.326: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:57.328: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local from pod dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f: the server could not find the requested resource (get pods dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f)
Jul  1 11:34:57.332: INFO: Lookups using dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9466.svc.cluster.local jessie_udp@dns-test-service-2.dns-9466.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9466.svc.cluster.local]

Jul  1 11:35:02.335: INFO: DNS probes using dns-9466/dns-test-eda08178-3d07-4608-91de-acfa7cf29a8f succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:35:02.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9466" for this suite.

• [SLOW TEST:44.928 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":136,"skipped":2390,"failed":0}
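The wheezy/jessie probe scripts above build a pod A-record name from the pod's IP (`hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-9466.pod.cluster.local"}'`): dots in the IP become dashes, then the `<namespace>.pod.cluster.local` suffix is appended. A minimal sketch of that transformation (the IP and namespace below are illustrative, not taken from this run):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mirror the awk pipeline in the probe script: dots in the pod IP
    become dashes, then the pod-subdomain DNS suffix is appended."""
    return pod_ip.replace(".", "-") + "." + namespace + ".pod.cluster.local"

# Example: a pod with IP 10.244.1.5 in namespace dns-9466
print(pod_a_record("10.244.1.5", "dns-9466"))
# -> 10-244-1-5.dns-9466.pod.cluster.local
```

This is the `PodARecord` name the probe resolves over both UDP (`dig +notcp`) and TCP (`dig +tcp`) before writing its `OK` marker file.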
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:35:05.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-0520388d-9c14-45ac-95a3-62431db64f9c
STEP: Creating a pod to test consume configMaps
Jul  1 11:35:09.962: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8" in namespace "projected-4311" to be "Succeeded or Failed"
Jul  1 11:35:10.856: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 893.111331ms
Jul  1 11:35:12.868: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.905070575s
Jul  1 11:35:15.306: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.343511859s
Jul  1 11:35:17.572: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.609627756s
Jul  1 11:35:19.575: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.612350485s
Jul  1 11:35:21.578: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.615481845s
Jul  1 11:35:23.646: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.683599069s
Jul  1 11:35:27.067: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.104605311s
Jul  1 11:35:29.335: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.37235967s
Jul  1 11:35:33.336: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.373048193s
Jul  1 11:35:35.340: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 25.377571517s
Jul  1 11:35:37.568: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 27.605527176s
Jul  1 11:35:39.666: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.7039183s
Jul  1 11:35:41.670: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 31.707740978s
Jul  1 11:35:43.674: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.71186329s
STEP: Saw pod success
Jul  1 11:35:43.674: INFO: Pod "pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8" satisfied condition "Succeeded or Failed"
Jul  1 11:35:43.677: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  1 11:35:44.647: INFO: Waiting for pod pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8 to disappear
Jul  1 11:35:44.659: INFO: Pod pod-projected-configmaps-c589e8b8-5915-4a21-b528-cb1ac0dbbef8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:35:44.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4311" for this suite.

• [SLOW TEST:38.691 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2411,"failed":0}
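The test above mounts a projected configMap volume with `defaultMode` set. In the Kubernetes API, `defaultMode` is a plain integer file mode: manifests conventionally write it in octal (e.g. `0644`), but JSON-serialized objects carry the decimal value. A quick sketch of the conversion (the `0644` value is illustrative; the log does not show which mode this run used):

```python
# File modes are written in octal in manifests but serialized as decimal
# integers in JSON. int(s, 8) performs the octal-to-decimal conversion.
octal_mode = "644"
decimal_mode = int(octal_mode, 8)
print(decimal_mode)  # -> 420, which is how 0644 appears in JSON
```

So a manifest's `defaultMode: 0644` and an API response's `"defaultMode": 420` describe the same permissions.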
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:35:44.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul  1 11:35:44.816: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:44.826: INFO: Number of nodes with available pods: 0
Jul  1 11:35:44.826: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:45.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:45.834: INFO: Number of nodes with available pods: 0
Jul  1 11:35:45.834: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:47.883: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:47.943: INFO: Number of nodes with available pods: 0
Jul  1 11:35:47.943: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:49.870: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:49.917: INFO: Number of nodes with available pods: 0
Jul  1 11:35:49.917: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:51.111: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:51.222: INFO: Number of nodes with available pods: 0
Jul  1 11:35:51.222: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:51.935: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:51.938: INFO: Number of nodes with available pods: 0
Jul  1 11:35:51.938: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:53.575: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:53.671: INFO: Number of nodes with available pods: 0
Jul  1 11:35:53.671: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:53.892: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:53.895: INFO: Number of nodes with available pods: 0
Jul  1 11:35:53.895: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:54.832: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:54.835: INFO: Number of nodes with available pods: 0
Jul  1 11:35:54.835: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:55.988: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:56.227: INFO: Number of nodes with available pods: 0
Jul  1 11:35:56.227: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:56.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:56.832: INFO: Number of nodes with available pods: 0
Jul  1 11:35:56.832: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:57.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:57.835: INFO: Number of nodes with available pods: 2
Jul  1 11:35:57.835: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jul  1 11:35:57.852: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:57.854: INFO: Number of nodes with available pods: 1
Jul  1 11:35:57.854: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:58.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:58.862: INFO: Number of nodes with available pods: 1
Jul  1 11:35:58.862: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:35:59.880: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:35:59.883: INFO: Number of nodes with available pods: 1
Jul  1 11:35:59.883: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:00.865: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:01.054: INFO: Number of nodes with available pods: 1
Jul  1 11:36:01.054: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:01.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:01.862: INFO: Number of nodes with available pods: 1
Jul  1 11:36:01.862: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:02.877: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:02.880: INFO: Number of nodes with available pods: 1
Jul  1 11:36:02.880: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:03.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:03.861: INFO: Number of nodes with available pods: 1
Jul  1 11:36:03.861: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:05.283: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:05.286: INFO: Number of nodes with available pods: 1
Jul  1 11:36:05.286: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:05.859: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:05.862: INFO: Number of nodes with available pods: 1
Jul  1 11:36:05.862: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:06.983: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:07.515: INFO: Number of nodes with available pods: 1
Jul  1 11:36:07.515: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:07.872: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:07.982: INFO: Number of nodes with available pods: 1
Jul  1 11:36:07.982: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:08.859: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:08.863: INFO: Number of nodes with available pods: 1
Jul  1 11:36:08.863: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:09.942: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:09.945: INFO: Number of nodes with available pods: 1
Jul  1 11:36:09.945: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:10.935: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:10.939: INFO: Number of nodes with available pods: 1
Jul  1 11:36:10.939: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:11.863: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:11.867: INFO: Number of nodes with available pods: 1
Jul  1 11:36:11.867: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:13.033: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:13.066: INFO: Number of nodes with available pods: 1
Jul  1 11:36:13.066: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:13.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:13.861: INFO: Number of nodes with available pods: 1
Jul  1 11:36:13.861: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:14.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:14.861: INFO: Number of nodes with available pods: 1
Jul  1 11:36:14.861: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:15.922: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:15.926: INFO: Number of nodes with available pods: 1
Jul  1 11:36:15.926: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:16.859: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:16.862: INFO: Number of nodes with available pods: 1
Jul  1 11:36:16.862: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:17.857: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:17.860: INFO: Number of nodes with available pods: 1
Jul  1 11:36:17.860: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:36:18.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:36:18.861: INFO: Number of nodes with available pods: 2
Jul  1 11:36:18.861: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6180, will wait for the garbage collector to delete the pods
Jul  1 11:36:18.922: INFO: Deleting DaemonSet.extensions daemon-set took: 6.124904ms
Jul  1 11:36:19.223: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.219971ms
Jul  1 11:36:33.825: INFO: Number of nodes with available pods: 0
Jul  1 11:36:33.825: INFO: Number of running nodes: 0, number of available pods: 0
Jul  1 11:36:33.827: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6180/daemonsets","resourceVersion":"16796138"},"items":null}

Jul  1 11:36:33.829: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6180/pods","resourceVersion":"16796138"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:36:33.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6180" for this suite.

• [SLOW TEST:49.164 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":138,"skipped":2418,"failed":0}
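The DaemonSet test above polls once a second: nodes carrying a `NoSchedule` taint the DaemonSet pods do not tolerate (here the control-plane's `node-role.kubernetes.io/master` taint) are skipped, and the check passes once every remaining node runs exactly one available daemon pod. A minimal sketch of that predicate, with hypothetical helper names and toy node data modeled on the log (this is an illustration of the logic, not the actual e2e framework code):

```python
def schedulable_nodes(nodes, tolerations):
    """Drop nodes carrying a NoSchedule taint the pod does not tolerate."""
    tolerated = {t["key"] for t in tolerations}
    return [
        n for n in nodes
        if all(t["key"] in tolerated or t["effect"] != "NoSchedule"
               for t in n.get("taints", []))
    ]

def daemonset_ready(nodes, tolerations, available_pods_by_node):
    """True once each schedulable node has exactly one available daemon pod."""
    targets = schedulable_nodes(nodes, tolerations)
    return all(available_pods_by_node.get(n["name"], 0) == 1 for n in targets)

nodes = [
    {"name": "kali-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master",
                 "effect": "NoSchedule"}]},
    {"name": "kali-worker", "taints": []},
    {"name": "kali-worker2", "taints": []},
]

# With no tolerations the control-plane node is skipped, as in the
# "can't tolerate node kali-control-plane ... skip checking" lines.
print(daemonset_ready(nodes, [], {"kali-worker": 1}))                      # False
print(daemonset_ready(nodes, [], {"kali-worker": 1, "kali-worker2": 1}))   # True
```

This mirrors why the log alternates between "skip checking this node" for the tainted control-plane and a count of available pods on the two workers until both report one pod each.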
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:36:33.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:36:33.966: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul  1 11:36:39.047: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  1 11:36:47.052: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul  1 11:36:49.054: INFO: Creating deployment "test-rollover-deployment"
Jul  1 11:36:49.155: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul  1 11:36:51.160: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul  1 11:36:51.164: INFO: Ensure that both replica sets have 1 created replica
Jul  1 11:36:51.168: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul  1 11:36:51.173: INFO: Updating deployment test-rollover-deployment
Jul  1 11:36:51.173: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jul  1 11:36:53.324: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul  1 11:36:54.334: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul  1 11:36:54.467: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:36:54.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200211, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:36:56.473: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:36:56.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200211, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:36:58.474: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:36:58.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200211, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:37:01.655: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:37:01.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200211, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:37:03.174: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:37:03.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200211, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:37:05.247: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:37:05.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200211, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:37:06.599: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:37:06.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200211, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:37:08.472: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:37:08.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200211, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:37:10.476: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:37:10.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200229, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:37:12.473: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:37:12.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200229, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:37:14.474: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:37:14.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200229, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:37:16.474: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:37:16.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200229, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:37:18.474: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 11:37:18.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200229, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729200209, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:37:20.503: INFO: 
Jul  1 11:37:20.503: INFO: Ensure that both old replica sets have no replicas
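The repeated status dumps above show why the rollover test loops for ~30 seconds: the deployment has `MinReadySeconds:10`, so the new replica stays `UnavailableReplicas:1` until it has been Ready long enough to count as Available. A hedged sketch of the completeness predicate being polled (hypothetical function, operating on a plain dict shaped like the logged `DeploymentStatus`; the real check in the e2e framework also compares generations and replica-set hashes):

```python
def deployment_complete(desired_replicas, status):
    """A rollover is done when every replica is updated and available."""
    return (status["updatedReplicas"] == desired_replicas
            and status["replicas"] == desired_replicas
            and status["availableReplicas"] == desired_replicas
            and status["unavailableReplicas"] == 0)

# Mid-rollover, as in the dumps above: old + new pod both counted,
# the new one not yet Available.
mid = {"replicas": 2, "updatedReplicas": 1,
       "availableReplicas": 1, "unavailableReplicas": 1}
# Final state, matching the closing Deployment dump below.
done = {"replicas": 1, "updatedReplicas": 1,
        "availableReplicas": 1, "unavailableReplicas": 0}

print(deployment_complete(1, mid))   # False
print(deployment_complete(1, done))  # True
```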
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul  1 11:37:20.646: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-2724 /apis/apps/v1/namespaces/deployment-2724/deployments/test-rollover-deployment 8e466326-7c11-46d1-a09f-e3bc9640c1f3 16796361 2 2020-07-01 11:36:49 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-07-01 11:36:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 
125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-01 11:37:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 
58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004966698  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-01 11:36:49 +0000 UTC,LastTransitionTime:2020-07-01 11:36:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-07-01 11:37:19 +0000 UTC,LastTransitionTime:2020-07-01 11:36:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul  1 11:37:20.648: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-2724 /apis/apps/v1/namespaces/deployment-2724/replicasets/test-rollover-deployment-84f7f6f64b 6f483964-891c-43e0-9f1b-cafd64b9586f 16796350 2 2020-07-01 11:36:51 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 8e466326-7c11-46d1-a09f-e3bc9640c1f3 0xc004966ce7 0xc004966ce8}] []  [{kube-controller-manager Update apps/v1 2020-07-01 11:37:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 101 52 54 54 51 50 54 45 55 99 49 49 45 52 54 100 49 45 97 48 57 102 45 101 51 98 99 57 54 52 48 99 49 102 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 
58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 
110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004966d78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul  1 11:37:20.648: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jul  1 11:37:20.648: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-2724 /apis/apps/v1/namespaces/deployment-2724/replicasets/test-rollover-controller dc1f510a-0530-4785-abda-8a1384e0a66d 16796360 2 2020-07-01 11:36:33 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 8e466326-7c11-46d1-a09f-e3bc9640c1f3 0xc004966aaf 0xc004966ac0}] []  [{e2e.test Update apps/v1 2020-07-01 11:36:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 
99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-01 11:37:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 101 52 54 54 51 50 54 45 55 99 49 49 45 52 54 100 49 45 97 48 57 102 45 101 51 98 99 57 54 52 48 99 49 102 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 
125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004966b58  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  1 11:37:20.649: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-2724 /apis/apps/v1/namespaces/deployment-2724/replicasets/test-rollover-deployment-5686c4cfd5 0d0287fd-3f7b-4bdc-ae36-34d2dd17cd45 16796265 2 2020-07-01 11:36:49 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 8e466326-7c11-46d1-a09f-e3bc9640c1f3 0xc004966bc7 0xc004966bc8}] []  [{kube-controller-manager Update apps/v1 2020-07-01 11:36:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 101 52 54 54 51 50 54 45 55 99 49 49 45 52 54 100 49 45 97 48 57 102 45 101 51 98 99 57 54 52 48 99 49 102 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 
58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004966c68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  1 11:37:20.651: INFO: Pod "test-rollover-deployment-84f7f6f64b-mw25x" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-mw25x test-rollover-deployment-84f7f6f64b- deployment-2724 /api/v1/namespaces/deployment-2724/pods/test-rollover-deployment-84f7f6f64b-mw25x e8de345d-b648-40d3-b41c-083ee7421e51 16796318 0 2020-07-01 11:36:51 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 6f483964-891c-43e0-9f1b-cafd64b9586f 0xc004967357 0xc004967358}] []  [{kube-controller-manager Update v1 2020-07-01 11:36:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 102 52 56 51 57 54 52 45 56 57 49 99 45 52 51 101 48 45 57 102 49 98 45 99 97 102 100 54 52 98 57 53 56 54 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 
115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-01 11:37:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 
101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 55 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhfsr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhfsr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhfsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:
nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:36:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:37:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:37:09 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 11:36:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.170,StartTime:2020-07-01 11:36:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 11:37:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://129641985bb4955b3170738f56ee30c31f29297d6edbc151fff11e16952a5a9b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.170,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:37:20.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2724" for this suite.

• [SLOW TEST:46.813 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":139,"skipped":2431,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:37:20.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  1 11:37:22.276: INFO: Waiting up to 5m0s for pod "pod-c4390b96-76c3-449a-95e5-c969db111a09" in namespace "emptydir-2955" to be "Succeeded or Failed"
Jul  1 11:37:23.095: INFO: Pod "pod-c4390b96-76c3-449a-95e5-c969db111a09": Phase="Pending", Reason="", readiness=false. Elapsed: 819.483946ms
Jul  1 11:37:27.033: INFO: Pod "pod-c4390b96-76c3-449a-95e5-c969db111a09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.757140546s
Jul  1 11:37:29.756: INFO: Pod "pod-c4390b96-76c3-449a-95e5-c969db111a09": Phase="Pending", Reason="", readiness=false. Elapsed: 7.480285465s
Jul  1 11:37:32.383: INFO: Pod "pod-c4390b96-76c3-449a-95e5-c969db111a09": Phase="Pending", Reason="", readiness=false. Elapsed: 10.107343924s
Jul  1 11:37:36.145: INFO: Pod "pod-c4390b96-76c3-449a-95e5-c969db111a09": Phase="Running", Reason="", readiness=true. Elapsed: 13.86916704s
Jul  1 11:37:38.148: INFO: Pod "pod-c4390b96-76c3-449a-95e5-c969db111a09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.872674384s
STEP: Saw pod success
Jul  1 11:37:38.148: INFO: Pod "pod-c4390b96-76c3-449a-95e5-c969db111a09" satisfied condition "Succeeded or Failed"
Jul  1 11:37:38.151: INFO: Trying to get logs from node kali-worker2 pod pod-c4390b96-76c3-449a-95e5-c969db111a09 container test-container: 
STEP: delete the pod
Jul  1 11:37:38.198: INFO: Waiting for pod pod-c4390b96-76c3-449a-95e5-c969db111a09 to disappear
Jul  1 11:37:38.208: INFO: Pod pod-c4390b96-76c3-449a-95e5-c969db111a09 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:37:38.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2955" for this suite.

• [SLOW TEST:17.589 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2433,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:37:38.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:37:38.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul  1 11:37:40.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9942 create -f -'
Jul  1 11:37:44.191: INFO: stderr: ""
Jul  1 11:37:44.191: INFO: stdout: "e2e-test-crd-publish-openapi-2906-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul  1 11:37:44.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9942 delete e2e-test-crd-publish-openapi-2906-crds test-cr'
Jul  1 11:37:44.317: INFO: stderr: ""
Jul  1 11:37:44.317: INFO: stdout: "e2e-test-crd-publish-openapi-2906-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jul  1 11:37:44.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9942 apply -f -'
Jul  1 11:37:44.618: INFO: stderr: ""
Jul  1 11:37:44.618: INFO: stdout: "e2e-test-crd-publish-openapi-2906-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul  1 11:37:44.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9942 delete e2e-test-crd-publish-openapi-2906-crds test-cr'
Jul  1 11:37:44.825: INFO: stderr: ""
Jul  1 11:37:44.825: INFO: stdout: "e2e-test-crd-publish-openapi-2906-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jul  1 11:37:44.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2906-crds'
Jul  1 11:37:45.113: INFO: stderr: ""
Jul  1 11:37:45.113: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2906-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:37:48.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9942" for this suite.

• [SLOW TEST:10.248 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":141,"skipped":2436,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:37:48.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:38:04.705: INFO: Waiting up to 5m0s for pod "client-envvars-8d164486-08fe-4472-b7f3-e96be17f1105" in namespace "pods-573" to be "Succeeded or Failed"
Jul  1 11:38:04.720: INFO: Pod "client-envvars-8d164486-08fe-4472-b7f3-e96be17f1105": Phase="Pending", Reason="", readiness=false. Elapsed: 14.711325ms
Jul  1 11:38:06.723: INFO: Pod "client-envvars-8d164486-08fe-4472-b7f3-e96be17f1105": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018050669s
Jul  1 11:38:08.855: INFO: Pod "client-envvars-8d164486-08fe-4472-b7f3-e96be17f1105": Phase="Running", Reason="", readiness=true. Elapsed: 4.149785172s
Jul  1 11:38:10.858: INFO: Pod "client-envvars-8d164486-08fe-4472-b7f3-e96be17f1105": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.15336183s
STEP: Saw pod success
Jul  1 11:38:10.858: INFO: Pod "client-envvars-8d164486-08fe-4472-b7f3-e96be17f1105" satisfied condition "Succeeded or Failed"
Jul  1 11:38:10.861: INFO: Trying to get logs from node kali-worker2 pod client-envvars-8d164486-08fe-4472-b7f3-e96be17f1105 container env3cont: 
STEP: delete the pod
Jul  1 11:38:10.892: INFO: Waiting for pod client-envvars-8d164486-08fe-4472-b7f3-e96be17f1105 to disappear
Jul  1 11:38:10.915: INFO: Pod client-envvars-8d164486-08fe-4472-b7f3-e96be17f1105 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:38:10.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-573" for this suite.

• [SLOW TEST:23.400 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2443,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:38:11.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6127.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6127.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  1 11:38:28.883: INFO: DNS probes using dns-6127/dns-test-513a3319-a0e2-481d-aa2d-f62e74db0040 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:38:28.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6127" for this suite.

• [SLOW TEST:17.049 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":143,"skipped":2469,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:38:28.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:39:17.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4465" for this suite.
STEP: Destroying namespace "nsdeletetest-3355" for this suite.
Jul  1 11:39:18.162: INFO: Namespace nsdeletetest-3355 was already deleted
STEP: Destroying namespace "nsdeletetest-4705" for this suite.

• [SLOW TEST:49.284 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":144,"skipped":2473,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:39:18.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:39:19.603: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ebcc9b37-783e-42b5-b0e1-a2aeaacf87a8" in namespace "projected-6210" to be "Succeeded or Failed"
Jul  1 11:39:19.996: INFO: Pod "downwardapi-volume-ebcc9b37-783e-42b5-b0e1-a2aeaacf87a8": Phase="Pending", Reason="", readiness=false. Elapsed: 392.724826ms
Jul  1 11:39:22.529: INFO: Pod "downwardapi-volume-ebcc9b37-783e-42b5-b0e1-a2aeaacf87a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.926210571s
Jul  1 11:39:25.361: INFO: Pod "downwardapi-volume-ebcc9b37-783e-42b5-b0e1-a2aeaacf87a8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.75794193s
Jul  1 11:39:27.564: INFO: Pod "downwardapi-volume-ebcc9b37-783e-42b5-b0e1-a2aeaacf87a8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.960826129s
Jul  1 11:39:29.568: INFO: Pod "downwardapi-volume-ebcc9b37-783e-42b5-b0e1-a2aeaacf87a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.965194109s
STEP: Saw pod success
Jul  1 11:39:29.568: INFO: Pod "downwardapi-volume-ebcc9b37-783e-42b5-b0e1-a2aeaacf87a8" satisfied condition "Succeeded or Failed"
Jul  1 11:39:29.571: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-ebcc9b37-783e-42b5-b0e1-a2aeaacf87a8 container client-container: 
STEP: delete the pod
Jul  1 11:39:29.697: INFO: Waiting for pod downwardapi-volume-ebcc9b37-783e-42b5-b0e1-a2aeaacf87a8 to disappear
Jul  1 11:39:29.724: INFO: Pod downwardapi-volume-ebcc9b37-783e-42b5-b0e1-a2aeaacf87a8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:39:29.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6210" for this suite.

• [SLOW TEST:11.501 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2476,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:39:29.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Jul  1 11:39:29.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:39:47.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6756" for this suite.

• [SLOW TEST:18.218 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":146,"skipped":2516,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:39:47.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:39:50.942: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13" in namespace "downward-api-2932" to be "Succeeded or Failed"
Jul  1 11:39:51.553: INFO: Pod "downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13": Phase="Pending", Reason="", readiness=false. Elapsed: 611.381659ms
Jul  1 11:39:54.343: INFO: Pod "downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13": Phase="Pending", Reason="", readiness=false. Elapsed: 3.40122028s
Jul  1 11:39:57.249: INFO: Pod "downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307226358s
Jul  1 11:39:59.349: INFO: Pod "downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13": Phase="Pending", Reason="", readiness=false. Elapsed: 8.406934461s
Jul  1 11:40:01.352: INFO: Pod "downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13": Phase="Pending", Reason="", readiness=false. Elapsed: 10.410443721s
Jul  1 11:40:03.357: INFO: Pod "downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13": Phase="Pending", Reason="", readiness=false. Elapsed: 12.415096589s
Jul  1 11:40:06.835: INFO: Pod "downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13": Phase="Pending", Reason="", readiness=false. Elapsed: 15.893020035s
Jul  1 11:40:08.838: INFO: Pod "downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13": Phase="Pending", Reason="", readiness=false. Elapsed: 17.896569773s
Jul  1 11:40:10.842: INFO: Pod "downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.899931402s
STEP: Saw pod success
Jul  1 11:40:10.842: INFO: Pod "downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13" satisfied condition "Succeeded or Failed"
Jul  1 11:40:10.844: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13 container client-container: 
STEP: delete the pod
Jul  1 11:40:11.552: INFO: Waiting for pod downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13 to disappear
Jul  1 11:40:11.573: INFO: Pod downwardapi-volume-f1025935-8c6c-49b2-b6e9-9d8359da6a13 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:40:11.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2932" for this suite.

• [SLOW TEST:23.631 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2528,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:40:11.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:40:18.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4307" for this suite.

• [SLOW TEST:6.511 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2548,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:40:18.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:40:31.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6659" for this suite.

• [SLOW TEST:13.379 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":149,"skipped":2553,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:40:31.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-9306/secret-test-c09f377b-cb5e-4e0b-9aad-ebd8b3ff0821
STEP: Creating a pod to test consume secrets
Jul  1 11:40:31.572: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9" in namespace "secrets-9306" to be "Succeeded or Failed"
Jul  1 11:40:31.576: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125154ms
Jul  1 11:40:33.783: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210922717s
Jul  1 11:40:35.786: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213553202s
Jul  1 11:40:38.207: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.635196413s
Jul  1 11:40:40.211: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.638564811s
Jul  1 11:40:43.025: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.452572852s
Jul  1 11:40:45.027: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.455315467s
Jul  1 11:40:47.870: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.297888936s
Jul  1 11:40:49.873: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9": Phase="Running", Reason="", readiness=true. Elapsed: 18.300588252s
Jul  1 11:40:51.876: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9": Phase="Running", Reason="", readiness=true. Elapsed: 20.303921183s
Jul  1 11:40:53.879: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.307158995s
STEP: Saw pod success
Jul  1 11:40:53.879: INFO: Pod "pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9" satisfied condition "Succeeded or Failed"
Jul  1 11:40:53.881: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9 container env-test: 
STEP: delete the pod
Jul  1 11:40:55.051: INFO: Waiting for pod pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9 to disappear
Jul  1 11:40:55.309: INFO: Pod pod-configmaps-fb85b4c8-10df-44d7-8c60-4e6519c3dca9 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:40:55.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9306" for this suite.

• [SLOW TEST:24.059 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2579,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:40:55.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:40:56.764: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul  1 11:40:57.031: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:40:57.163: INFO: Number of nodes with available pods: 0
Jul  1 11:40:57.163: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:40:58.413: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:40:58.845: INFO: Number of nodes with available pods: 0
Jul  1 11:40:58.845: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:40:59.248: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:40:59.303: INFO: Number of nodes with available pods: 0
Jul  1 11:40:59.303: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:41:00.167: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:00.170: INFO: Number of nodes with available pods: 0
Jul  1 11:41:00.170: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:41:01.937: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:02.812: INFO: Number of nodes with available pods: 0
Jul  1 11:41:02.812: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:41:03.823: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:03.826: INFO: Number of nodes with available pods: 0
Jul  1 11:41:03.826: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:41:04.207: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:04.209: INFO: Number of nodes with available pods: 0
Jul  1 11:41:04.209: INFO: Node kali-worker is running more than one daemon pod
Jul  1 11:41:05.182: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:05.185: INFO: Number of nodes with available pods: 1
Jul  1 11:41:05.185: INFO: Node kali-worker2 is running more than one daemon pod
Jul  1 11:41:06.167: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:06.170: INFO: Number of nodes with available pods: 2
Jul  1 11:41:06.170: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jul  1 11:41:06.227: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:06.227: INFO: Wrong image for pod: daemon-set-mm8s2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:06.308: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:07.314: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:07.314: INFO: Wrong image for pod: daemon-set-mm8s2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:07.317: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:08.311: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:08.311: INFO: Wrong image for pod: daemon-set-mm8s2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:08.313: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:09.390: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:09.390: INFO: Wrong image for pod: daemon-set-mm8s2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:09.390: INFO: Pod daemon-set-mm8s2 is not available
Jul  1 11:41:09.392: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:10.311: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:10.311: INFO: Pod daemon-set-rdcnp is not available
Jul  1 11:41:10.313: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:11.312: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:11.312: INFO: Pod daemon-set-rdcnp is not available
Jul  1 11:41:11.316: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:12.409: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:12.409: INFO: Pod daemon-set-rdcnp is not available
Jul  1 11:41:12.413: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:13.451: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:13.453: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:14.314: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:14.314: INFO: Pod daemon-set-dzxqg is not available
Jul  1 11:41:14.318: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:15.320: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:15.320: INFO: Pod daemon-set-dzxqg is not available
Jul  1 11:41:15.323: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:16.312: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:16.312: INFO: Pod daemon-set-dzxqg is not available
Jul  1 11:41:16.315: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:17.312: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:17.312: INFO: Pod daemon-set-dzxqg is not available
Jul  1 11:41:17.315: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:18.312: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:18.312: INFO: Pod daemon-set-dzxqg is not available
Jul  1 11:41:18.315: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:19.422: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:19.422: INFO: Pod daemon-set-dzxqg is not available
Jul  1 11:41:19.426: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:21.707: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:21.707: INFO: Pod daemon-set-dzxqg is not available
Jul  1 11:41:21.712: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:22.312: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:22.312: INFO: Pod daemon-set-dzxqg is not available
Jul  1 11:41:22.318: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:23.967: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:23.968: INFO: Pod daemon-set-dzxqg is not available
Jul  1 11:41:24.136: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:25.562: INFO: Wrong image for pod: daemon-set-dzxqg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul  1 11:41:25.562: INFO: Pod daemon-set-dzxqg is not available
Jul  1 11:41:25.568: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:27.128: INFO: Pod daemon-set-fr8wj is not available
Jul  1 11:41:27.381: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul  1 11:41:27.542: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:27.680: INFO: Number of nodes with available pods: 1
Jul  1 11:41:27.680: INFO: Node kali-worker2 is running more than one daemon pod
Jul  1 11:41:28.684: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:28.688: INFO: Number of nodes with available pods: 1
Jul  1 11:41:28.688: INFO: Node kali-worker2 is running more than one daemon pod
Jul  1 11:41:29.962: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:29.966: INFO: Number of nodes with available pods: 1
Jul  1 11:41:29.966: INFO: Node kali-worker2 is running more than one daemon pod
Jul  1 11:41:30.684: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:30.688: INFO: Number of nodes with available pods: 1
Jul  1 11:41:30.688: INFO: Node kali-worker2 is running more than one daemon pod
Jul  1 11:41:31.694: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:31.741: INFO: Number of nodes with available pods: 1
Jul  1 11:41:31.741: INFO: Node kali-worker2 is running more than one daemon pod
Jul  1 11:41:32.685: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:32.689: INFO: Number of nodes with available pods: 1
Jul  1 11:41:32.689: INFO: Node kali-worker2 is running more than one daemon pod
Jul  1 11:41:33.685: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:33.688: INFO: Number of nodes with available pods: 1
Jul  1 11:41:33.688: INFO: Node kali-worker2 is running more than one daemon pod
Jul  1 11:41:34.684: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  1 11:41:34.686: INFO: Number of nodes with available pods: 2
Jul  1 11:41:34.686: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4917, will wait for the garbage collector to delete the pods
Jul  1 11:41:34.754: INFO: Deleting DaemonSet.extensions daemon-set took: 4.875551ms
Jul  1 11:41:36.154: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.400280968s
Jul  1 11:41:53.793: INFO: Number of nodes with available pods: 0
Jul  1 11:41:53.793: INFO: Number of running nodes: 0, number of available pods: 0
Jul  1 11:41:53.795: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4917/daemonsets","resourceVersion":"16797436"},"items":null}

Jul  1 11:41:53.798: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4917/pods","resourceVersion":"16797436"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:41:54.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4917" for this suite.

• [SLOW TEST:58.850 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":151,"skipped":2593,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:41:54.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul  1 11:41:54.670: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  1 11:41:54.712: INFO: Waiting for terminating namespaces to be deleted...
Jul  1 11:41:54.714: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
Jul  1 11:41:54.730: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jul  1 11:41:54.730: INFO: 	Container kindnet-cni ready: true, restart count 7
Jul  1 11:41:54.730: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jul  1 11:41:54.730: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  1 11:41:54.730: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
Jul  1 11:41:54.735: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jul  1 11:41:54.735: INFO: 	Container kindnet-cni ready: true, restart count 5
Jul  1 11:41:54.735: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jul  1 11:41:54.735: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.161d9e22abf26bce], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:41:58.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2181" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:5.797 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":152,"skipped":2605,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:42:00.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0701 11:42:23.988665       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  1 11:42:23.988: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:42:23.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-116" for this suite.

• [SLOW TEST:24.430 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":153,"skipped":2606,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:42:24.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  1 11:42:27.308: INFO: Waiting up to 5m0s for pod "pod-378450a6-475e-4e4d-8f93-93393e5fcf46" in namespace "emptydir-9303" to be "Succeeded or Failed"
Jul  1 11:42:27.423: INFO: Pod "pod-378450a6-475e-4e4d-8f93-93393e5fcf46": Phase="Pending", Reason="", readiness=false. Elapsed: 114.667075ms
Jul  1 11:42:29.511: INFO: Pod "pod-378450a6-475e-4e4d-8f93-93393e5fcf46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203307228s
Jul  1 11:42:32.489: INFO: Pod "pod-378450a6-475e-4e4d-8f93-93393e5fcf46": Phase="Pending", Reason="", readiness=false. Elapsed: 5.18048616s
Jul  1 11:42:35.632: INFO: Pod "pod-378450a6-475e-4e4d-8f93-93393e5fcf46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324350825s
Jul  1 11:42:37.874: INFO: Pod "pod-378450a6-475e-4e4d-8f93-93393e5fcf46": Phase="Running", Reason="", readiness=true. Elapsed: 10.566398849s
Jul  1 11:42:39.878: INFO: Pod "pod-378450a6-475e-4e4d-8f93-93393e5fcf46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.570129745s
STEP: Saw pod success
Jul  1 11:42:39.878: INFO: Pod "pod-378450a6-475e-4e4d-8f93-93393e5fcf46" satisfied condition "Succeeded or Failed"
Jul  1 11:42:39.880: INFO: Trying to get logs from node kali-worker2 pod pod-378450a6-475e-4e4d-8f93-93393e5fcf46 container test-container: 
STEP: delete the pod
Jul  1 11:42:40.049: INFO: Waiting for pod pod-378450a6-475e-4e4d-8f93-93393e5fcf46 to disappear
Jul  1 11:42:40.182: INFO: Pod pod-378450a6-475e-4e4d-8f93-93393e5fcf46 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:42:40.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9303" for this suite.

• [SLOW TEST:15.596 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2625,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:42:40.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:42:40.506: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 11:42:40.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jul  1 11:42:44.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4748 create -f -'
Jul  1 11:43:07.228: INFO: stderr: ""
Jul  1 11:43:07.228: INFO: stdout: "e2e-test-crd-publish-openapi-8763-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul  1 11:43:07.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4748 delete e2e-test-crd-publish-openapi-8763-crds test-foo'
Jul  1 11:43:07.344: INFO: stderr: ""
Jul  1 11:43:07.344: INFO: stdout: "e2e-test-crd-publish-openapi-8763-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jul  1 11:43:07.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4748 apply -f -'
Jul  1 11:43:07.655: INFO: stderr: ""
Jul  1 11:43:07.655: INFO: stdout: "e2e-test-crd-publish-openapi-8763-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul  1 11:43:07.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4748 delete e2e-test-crd-publish-openapi-8763-crds test-foo'
Jul  1 11:43:07.762: INFO: stderr: ""
Jul  1 11:43:07.762: INFO: stdout: "e2e-test-crd-publish-openapi-8763-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jul  1 11:43:07.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4748 create -f -'
Jul  1 11:43:08.380: INFO: rc: 1
Jul  1 11:43:08.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4748 apply -f -'
Jul  1 11:43:08.662: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jul  1 11:43:08.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4748 create -f -'
Jul  1 11:43:08.953: INFO: rc: 1
Jul  1 11:43:08.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4748 apply -f -'
Jul  1 11:43:09.227: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jul  1 11:43:09.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8763-crds'
Jul  1 11:43:09.495: INFO: stderr: ""
Jul  1 11:43:09.495: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8763-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jul  1 11:43:09.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8763-crds.metadata'
Jul  1 11:43:09.826: INFO: stderr: ""
Jul  1 11:43:09.826: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8763-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. 
This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. 
If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within which each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended on by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. 
May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     pass them unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
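[Editor's note] The `kubectl explain` output above documents the standard ObjectMeta fields published in the CRD's OpenAPI schema. A minimal manifest exercising a few of those fields might look like the following sketch; the apiVersion and kind are taken from the test's CRD, while all field values are illustrative assumptions:

```yaml
# Hypothetical object of the CRD under test; all values are illustrative.
apiVersion: crd-publish-openapi-test-foo.example.com/v1
kind: E2e-test-crd-publish-openapi-8763-crd
metadata:
  # generateName: the server appends a unique suffix, so the stored
  # name will differ from what the client sent
  generateName: foo-
  labels:
    app: demo                 # selectable via label selectors
  annotations:
    notes: "arbitrary metadata"   # annotations are stored but not queryable
  finalizers:
    - example.com/cleanup     # deletion blocks until this entry is removed
```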
Jul  1 11:43:09.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8763-crds.spec'
Jul  1 11:43:10.102: INFO: stderr: ""
Jul  1 11:43:10.102: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8763-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jul  1 11:43:10.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8763-crds.spec.bars'
Jul  1 11:43:10.385: INFO: stderr: ""
Jul  1 11:43:10.385: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8763-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jul  1 11:43:10.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8763-crds.spec.bars2'
Jul  1 11:43:10.706: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:43:13.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4748" for this suite.

• [SLOW TEST:33.110 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":156,"skipped":2725,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:43:13.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:43:13.810: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6" in namespace "downward-api-1148" to be "Succeeded or Failed"
Jul  1 11:43:13.831: INFO: Pod "downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.421088ms
Jul  1 11:43:15.834: INFO: Pod "downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023893175s
Jul  1 11:43:18.427: INFO: Pod "downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.617226335s
Jul  1 11:43:20.430: INFO: Pod "downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.62004317s
Jul  1 11:43:22.524: INFO: Pod "downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.714312445s
Jul  1 11:43:24.530: INFO: Pod "downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.719618839s
Jul  1 11:43:26.622: INFO: Pod "downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.811517129s
Jul  1 11:43:28.769: INFO: Pod "downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6": Phase="Running", Reason="", readiness=true. Elapsed: 14.95850992s
Jul  1 11:43:30.772: INFO: Pod "downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.961609215s
STEP: Saw pod success
Jul  1 11:43:30.772: INFO: Pod "downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6" satisfied condition "Succeeded or Failed"
Jul  1 11:43:30.774: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6 container client-container: 
STEP: delete the pod
Jul  1 11:43:31.286: INFO: Waiting for pod downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6 to disappear
Jul  1 11:43:31.359: INFO: Pod downwardapi-volume-5f3b7ed7-bc1c-4e98-b494-4af75efe4ca6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:43:31.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1148" for this suite.

• [SLOW TEST:17.863 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2729,"failed":0}
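[Editor's note] The pod this test creates exposes pod metadata through a downward API volume and asserts the file mode set by `defaultMode`. A sketch of that kind of manifest, with illustrative names and a busybox image standing in for the test's client container:

```yaml
# Downward API volume with an explicit defaultMode, the behavior
# verified by the test above; names and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox:1.32
      command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/labels"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        defaultMode: 0400   # files default to mode r-------- unless a per-item mode overrides it
        items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```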
SS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:43:31.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Jul  1 11:43:32.712: INFO: created pod pod-service-account-defaultsa
Jul  1 11:43:32.712: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul  1 11:43:32.743: INFO: created pod pod-service-account-mountsa
Jul  1 11:43:32.743: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul  1 11:43:32.886: INFO: created pod pod-service-account-nomountsa
Jul  1 11:43:32.886: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul  1 11:43:32.917: INFO: created pod pod-service-account-defaultsa-mountspec
Jul  1 11:43:32.917: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul  1 11:43:33.091: INFO: created pod pod-service-account-mountsa-mountspec
Jul  1 11:43:33.091: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul  1 11:43:33.203: INFO: created pod pod-service-account-nomountsa-mountspec
Jul  1 11:43:33.203: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul  1 11:43:33.270: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul  1 11:43:33.270: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul  1 11:43:33.351: INFO: created pod pod-service-account-mountsa-nomountspec
Jul  1 11:43:33.351: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul  1 11:43:33.395: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul  1 11:43:33.395: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 11:43:33.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8385" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":158,"skipped":2731,"failed":0}
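[Editor's note] The nine pods above cover the combinations of `automountServiceAccountToken` set on the ServiceAccount, on the pod spec, or left unset; the pod-level setting takes precedence over the ServiceAccount-level one. A sketch of the opt-out case, with illustrative names:

```yaml
# Opting out of API token automount, as exercised by the test above.
# The pod-level field overrides the ServiceAccount-level one.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa-nomountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # explicit pod-level opt-out
  containers:
    - name: main
      image: busybox:1.32
      command: ["sleep", "3600"]
```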
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 11:43:33.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8828
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Jul  1 11:43:33.981: INFO: Found 0 stateful pods, waiting for 3
Jul  1 11:43:45.252: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:43:54.159: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:44:04.094: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:44:17.319: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:44:25.025: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:44:35.109: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:44:44.028: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:44:56.872: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:45:04.469: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:45:17.472: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:45:23.986: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:45:33.986: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:45:45.556: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:45:54.308: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:46:03.984: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:46:15.550: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:46:15.550: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:46:15.550: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  1 11:46:26.457: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:46:26.457: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:46:26.457: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  1 11:46:35.784: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:46:35.784: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:46:35.784: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  1 11:46:44.175: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:46:44.175: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:46:44.175: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Jul  1 11:46:55.677: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:46:55.677: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:46:55.677: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul  1 11:46:56.148: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
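[Editor's note] The canary in this test is driven by the StatefulSet `RollingUpdate` partition: pods with an ordinal greater than or equal to the partition are updated to the new template, while lower ordinals keep the old revision (and a partition greater than the replica count, as in the previous step, updates nothing). A fragment of such a spec, with illustrative values matching the three-replica set above:

```yaml
# Fragment of a StatefulSet spec for a partitioned canary update:
# with 3 replicas and partition: 2, only ss2-2 (ordinal >= 2)
# receives the new pod template. Values are illustrative.
spec:
  replicas: 3
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # pods with ordinal < 2 stay on the old revision
```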
Jul  1 11:47:07.615: INFO: Updating stateful set ss2
Jul  1 11:47:13.868: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:47:27.427: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:47:35.992: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:47:48.002: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:47:54.227: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:48:05.420: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:48:14.342: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:48:25.755: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:48:35.120: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:48:54.403: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:49:07.105: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:49:13.875: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:49:23.874: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:49:33.975: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:49:45.019: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:49:53.966: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:50:03.873: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:50:13.874: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:50:24.124: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:50:33.928: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:50:44.002: INFO: Waiting for Pod statefulset-8828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jul  1 11:51:51.344: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:52:01.577: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:52:11.391: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:52:21.511: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:52:33.446: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:52:41.368: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:52:51.446: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:53:01.380: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:53:12.824: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:53:21.907: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:53:32.290: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:53:42.764: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:53:51.978: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:54:01.371: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:54:12.686: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:54:22.315: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:54:31.351: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:54:41.550: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:54:51.652: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:55:02.485: INFO: Found 2 stateful pods, waiting for 3
Jul  1 11:55:12.043: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:55:12.043: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:55:12.043: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  1 11:55:22.367: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:55:22.368: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:55:22.368: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul  1 11:55:22.536: INFO: Updating stateful set ss2
Jul  1 11:55:23.731: INFO: Waiting for Pod statefulset-8828/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:55:33.965: INFO: Waiting for Pod statefulset-8828/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:55:55.692: INFO: Updating stateful set ss2
Jul  1 11:56:01.485: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:56:01.485: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:56:12.378: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:56:12.378: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:56:22.880: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:56:22.880: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:56:31.541: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:56:31.541: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:56:42.512: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:56:42.512: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:56:52.165: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:56:52.165: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:57:02.616: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:57:02.616: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:57:16.957: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:57:16.957: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:57:22.232: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:57:22.232: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:57:33.195: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:57:33.195: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:57:44.728: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:57:44.728: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:57:52.147: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:57:52.147: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:58:02.208: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:58:02.208: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:58:13.662: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:58:13.663: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:58:21.558: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:58:21.558: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:58:32.289: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:58:32.290: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:58:41.492: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:58:41.492: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:58:53.781: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:58:53.781: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:59:01.850: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:59:01.850: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:59:11.591: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:59:11.591: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:59:21.954: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:59:21.954: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:59:36.932: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:59:36.933: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:59:46.320: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:59:46.320: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 11:59:53.110: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 11:59:53.110: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 12:00:04.102: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:00:04.102: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 12:00:15.087: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:00:15.087: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 12:00:23.684: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:00:23.684: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 12:00:34.775: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:00:34.775: INFO: Waiting for Pod statefulset-8828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  1 12:02:34.208: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:02:42.557: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:02:52.950: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:03:01.790: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:03:13.424: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:03:22.395: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:03:38.638: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:03:42.476: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:03:51.532: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:04:01.571: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:04:11.492: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:04:21.491: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:04:33.731: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:04:45.790: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:04:51.493: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:05:01.491: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:05:24.712: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:05:32.201: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:05:42.632: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:05:53.340: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:06:01.491: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:06:01.494: INFO: Waiting for StatefulSet statefulset-8828/ss2 to complete update
Jul  1 12:06:01.494: FAIL: Failed waiting for state update: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForState(0x534aca0, 0xc001898420, 0xc000630000, 0xc001a4f048)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:74 +0x105
k8s.io/kubernetes/test/e2e/apps.waitForPartitionedRollingUpdate(0x534aca0, 0xc001898420, 0xc000726000, 0x0, 0x0)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/wait.go:46 +0x1f5
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.8()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:458 +0x427d
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000af7b00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc000af7b00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc000af7b00, 0x4ae8810)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul  1 12:06:01.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe po ss2-0 --namespace=statefulset-8828'
Jul  1 12:06:06.591: INFO: stderr: ""
Jul  1 12:06:06.591: INFO: 
Output of kubectl describe ss2-0:
Name:         ss2-0
Namespace:    statefulset-8828
Priority:     0
Node:         kali-worker2/172.17.0.18
Start Time:   Wed, 01 Jul 2020 12:02:22 +0000
Labels:       baz=blah
              controller-revision-hash=ss2-84f9d6bf57
              foo=bar
              statefulset.kubernetes.io/pod-name=ss2-0
Annotations:  <none>
Status:       Running
IP:           10.244.1.190
IPs:
  IP:           10.244.1.190
Controlled By:  StatefulSet/ss2
Containers:
  webserver:
    Container ID:   containerd://01939ac9478b14844d7d7068c4dd665cbe4870cb849a0181c074a36bbad85111
    Image:          docker.io/library/httpd:2.4.39-alpine
    Image ID:       docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CreateContainerError
    Last State:     Terminated
      Exit Code:    0
      Started:      Mon, 01 Jan 0001 00:00:00 +0000
      Finished:     Mon, 01 Jan 0001 00:00:00 +0000
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zxhdh (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-zxhdh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zxhdh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                 From                   Message
  ----     ------     ----                ----                   -------
  Normal   Scheduled  <unknown>           default-scheduler      Successfully assigned statefulset-8828/ss2-0 to kali-worker2
  Warning  Failed     103s                kubelet, kali-worker2  Error: context deadline exceeded
  Warning  Failed     15s (x7 over 102s)  kubelet, kali-worker2  Error: failed to reserve container name "webserver_ss2-0_statefulset-8828_4691d211-f328-4549-97bc-f4ab572ddb68_0": name "webserver_ss2-0_statefulset-8828_4691d211-f328-4549-97bc-f4ab572ddb68_0" is reserved for "01939ac9478b14844d7d7068c4dd665cbe4870cb849a0181c074a36bbad85111"
  Normal   Pulled     0s (x9 over 3m43s)  kubelet, kali-worker2  Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine

Jul  1 12:06:06.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs ss2-0 --namespace=statefulset-8828 --tail=100'
Jul  1 12:06:06.691: INFO: stderr: ""
Jul  1 12:06:06.691: INFO: stdout: "failed to try resolving symlinks in path \"/var/log/pods/statefulset-8828_ss2-0_4691d211-f328-4549-97bc-f4ab572ddb68/webserver/0.log\": lstat /var/log/pods/statefulset-8828_ss2-0_4691d211-f328-4549-97bc-f4ab572ddb68/webserver/0.log: no such file or directory"
Jul  1 12:06:06.691: INFO: 
Last 100 log lines of ss2-0:
failed to try resolving symlinks in path "/var/log/pods/statefulset-8828_ss2-0_4691d211-f328-4549-97bc-f4ab572ddb68/webserver/0.log": lstat /var/log/pods/statefulset-8828_ss2-0_4691d211-f328-4549-97bc-f4ab572ddb68/webserver/0.log: no such file or directory
Jul  1 12:06:06.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe po ss2-1 --namespace=statefulset-8828'
Jul  1 12:06:06.789: INFO: stderr: ""
Jul  1 12:06:06.789: INFO: 
Output of kubectl describe ss2-1:
Name:         ss2-1
Namespace:    statefulset-8828
Priority:     0
Node:         kali-worker2/172.17.0.18
Start Time:   Wed, 01 Jul 2020 11:55:48 +0000
Labels:       baz=blah
              controller-revision-hash=ss2-84f9d6bf57
              foo=bar
              statefulset.kubernetes.io/pod-name=ss2-1
Annotations:  <none>
Status:       Running
IP:           10.244.1.189
IPs:
  IP:           10.244.1.189
Controlled By:  StatefulSet/ss2
Containers:
  webserver:
    Container ID:   containerd://996257fdca154e180f2572f6ec3d3a2ba5d04826d6ec0a33b314ad6fd5d31b17
    Image:          docker.io/library/httpd:2.4.39-alpine
    Image ID:       docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 01 Jul 2020 11:59:14 +0000
    Last State:     Terminated
      Exit Code:    0
      Started:      Mon, 01 Jan 0001 00:00:00 +0000
      Finished:     Mon, 01 Jan 0001 00:00:00 +0000
    Ready:          True
    Restart Count:  1
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zxhdh (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-zxhdh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zxhdh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From                   Message
  ----     ------     ----                  ----                   -------
  Normal   Scheduled  <unknown>             default-scheduler      Successfully assigned statefulset-8828/ss2-1 to kali-worker2
  Warning  Failed     8m9s                  kubelet, kali-worker2  Error: context deadline exceeded
  Warning  Failed     7m28s (x4 over 8m8s)  kubelet, kali-worker2  Error: failed to reserve container name "webserver_ss2-1_statefulset-8828_201a2caf-081b-4a2c-bbe8-ff9f48cec630_0": name "webserver_ss2-1_statefulset-8828_201a2caf-081b-4a2c-bbe8-ff9f48cec630_0" is reserved for "e7b264680089225d29873a3b30354bdea046e4ab0055e02919074e8ab5ca320c"
  Normal   Pulled     7m16s (x6 over 10m)   kubelet, kali-worker2  Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
  Normal   Created    6m53s                 kubelet, kali-worker2  Created container webserver
  Normal   Started    6m52s                 kubelet, kali-worker2  Started container webserver
  Warning  Unhealthy  6m52s                 kubelet, kali-worker2  Readiness probe failed: Get http://10.244.1.189:80/index.html: dial tcp 10.244.1.189:80: connect: connection refused

Jul  1 12:06:06.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs ss2-1 --namespace=statefulset-8828 --tail=100'
Jul  1 12:06:06.895: INFO: stderr: ""
Jul  1 12:06:06.896: INFO: 
Last 100 log lines of ss2-1:
10.244.1.1 - - [01/Jul/2020:12:04:26 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:27 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:28 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:29 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:30 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:31 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:32 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:33 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:34 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:35 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:36 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:37 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:38 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:39 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:40 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:41 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:42 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:43 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:44 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:45 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:46 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:47 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:48 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:49 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:50 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:51 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:52 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:53 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:54 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:55 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:56 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:57 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:58 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:04:59 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:00 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:01 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:02 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:03 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:04 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:05 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:06 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:07 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:08 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:09 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:10 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:11 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:12 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:13 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:14 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:15 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:16 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:17 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:18 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:19 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:20 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:21 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:22 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:23 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:24 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:25 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:26 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:27 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:28 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:29 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:30 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:31 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:32 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:33 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:34 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:35 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:36 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:37 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:38 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:39 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:40 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:41 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:42 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:43 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:44 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:45 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:46 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:47 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:48 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:49 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:50 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:51 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:52 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:53 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:54 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:55 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:56 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:57 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:58 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:05:59 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:06:00 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:06:01 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:06:02 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:06:03 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:06:04 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.1.1 - - [01/Jul/2020:12:06:05 +0000] "GET /index.html HTTP/1.1" 200 45

Jul  1 12:06:06.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe po ss2-2 --namespace=statefulset-8828'
Jul  1 12:06:07.009: INFO: stderr: ""
Jul  1 12:06:07.009: INFO: stdout: (escaped `kubectl describe` output; rendered below)
Jul  1 12:06:07.010: INFO: 
Output of kubectl describe ss2-2:
Name:         ss2-2
Namespace:    statefulset-8828
Priority:     0
Node:         kali-worker/172.17.0.15
Start Time:   Wed, 01 Jul 2020 11:55:04 +0000
Labels:       baz=blah
              controller-revision-hash=ss2-84f9d6bf57
              foo=bar
              statefulset.kubernetes.io/pod-name=ss2-2
Annotations:  <none>
Status:       Running
IP:           10.244.2.184
IPs:
  IP:           10.244.2.184
Controlled By:  StatefulSet/ss2
Containers:
  webserver:
    Container ID:   containerd://82ed52194417439c96400a38dcf8c954289109f3454b3a56c8d1b41430a08c15
    Image:          docker.io/library/httpd:2.4.39-alpine
    Image ID:       docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 01 Jul 2020 11:55:10 +0000
    Ready:          True
    Restart Count:  0
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zxhdh (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-zxhdh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zxhdh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                  Message
  ----    ------     ----       ----                  -------
  Normal  Scheduled  <unknown>  default-scheduler     Successfully assigned statefulset-8828/ss2-2 to kali-worker
  Normal  Pulled     11m        kubelet, kali-worker  Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
  Normal  Created    10m        kubelet, kali-worker  Created container webserver
  Normal  Started    10m        kubelet, kali-worker  Started container webserver

Jul  1 12:06:07.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs ss2-2 --namespace=statefulset-8828 --tail=100'
Jul  1 12:06:07.116: INFO: stderr: ""
Jul  1 12:06:07.116: INFO: stdout: (escaped container access-log output; rendered below)
Jul  1 12:06:07.117: INFO: 
Last 100 log lines of ss2-2:
10.244.2.1 - - [01/Jul/2020:12:04:27 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:28 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:29 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:30 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:31 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:32 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:33 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:34 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:35 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:36 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:37 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:38 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:39 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:40 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:41 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:42 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:43 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:44 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:45 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:46 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:47 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:48 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:49 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:50 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:51 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:52 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:53 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:54 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:55 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:56 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:57 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:58 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:04:59 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:00 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:01 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:02 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:03 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:04 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:05 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:06 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:07 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:08 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:09 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:10 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:11 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:12 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:13 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:14 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:15 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:16 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:17 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:18 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:19 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:20 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:21 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:22 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:23 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:24 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:25 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:26 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:27 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:28 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:29 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:30 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:31 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:32 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:33 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:34 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:35 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:36 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:37 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:38 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:39 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:40 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:41 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:42 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:43 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:44 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:45 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:46 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:47 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:48 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:49 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:50 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:51 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:52 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:53 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:54 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:55 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:56 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:57 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:58 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:05:59 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:06:00 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:06:01 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:06:02 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:06:03 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:06:04 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:06:05 +0000] "GET /index.html HTTP/1.1" 200 45
10.244.2.1 - - [01/Jul/2020:12:06:06 +0000] "GET /index.html HTTP/1.1" 200 45

Jul  1 12:06:07.117: INFO: Deleting all statefulset in ns statefulset-8828
Jul  1 12:06:07.119: INFO: Scaling statefulset ss2 to 0
Jul  1 12:11:57.909: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 12:11:57.912: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "statefulset-8828".
STEP: Found 64 events.
Jul  1 12:11:57.931: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-8828/ss2-0 to kali-worker
Jul  1 12:11:57.931: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-8828/ss2-0 to kali-worker2
Jul  1 12:11:57.931: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-8828/ss2-0 to kali-worker
Jul  1 12:11:57.931: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-8828/ss2-1 to kali-worker2
Jul  1 12:11:57.931: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-8828/ss2-1 to kali-worker2
Jul  1 12:11:57.931: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-8828/ss2-2 to kali-worker2
Jul  1 12:11:57.931: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-8828/ss2-2 to kali-worker
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:43:34 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:43:46 +0000 UTC - event for ss2-0: {kubelet kali-worker} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:45:05 +0000 UTC - event for ss2-0: {kubelet kali-worker} Created: Created container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:45:11 +0000 UTC - event for ss2-0: {kubelet kali-worker} Started: Started container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:45:13 +0000 UTC - event for ss2-0: {kubelet kali-worker} Unhealthy: Readiness probe failed: Get http://10.244.2.182:80/index.html: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:45:54 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:45:56 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:46:04 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Started: Started container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:46:04 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Created: Created container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:46:05 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:46:08 +0000 UTC - event for ss2-2: {kubelet kali-worker2} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:46:37 +0000 UTC - event for ss2-2: {kubelet kali-worker2} Created: Created container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:46:39 +0000 UTC - event for ss2-2: {kubelet kali-worker2} Started: Started container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:47:13 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-2 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:47:13 +0000 UTC - event for ss2-2: {kubelet kali-worker2} Killing: Stopping container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:47:14 +0000 UTC - event for ss2-2: {kubelet kali-worker2} Unhealthy: Readiness probe failed: Get http://10.244.1.188:80/index.html: dial tcp 10.244.1.188:80: connect: connection refused
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:51:40 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:51:44 +0000 UTC - event for ss2-0: {kubelet kali-worker} Unhealthy: Readiness probe failed: Get http://10.244.2.182:80/index.html: dial tcp 10.244.2.182:80: connect: connection refused
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:51:44 +0000 UTC - event for ss2-0: {kubelet kali-worker} Killing: Stopping container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:51:46 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:53:23 +0000 UTC - event for ss2-0: {kubelet kali-worker} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:54:59 +0000 UTC - event for ss2-0: {kubelet kali-worker} Created: Created container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:55:00 +0000 UTC - event for ss2-0: {kubelet kali-worker} Started: Started container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:55:05 +0000 UTC - event for ss2-2: {kubelet kali-worker} Pulled: Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:55:10 +0000 UTC - event for ss2-2: {kubelet kali-worker} Created: Created container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:55:11 +0000 UTC - event for ss2-2: {kubelet kali-worker} Started: Started container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:55:23 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-1 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:55:23 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Killing: Stopping container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:55:24 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Unhealthy: Readiness probe failed: Get http://10.244.1.187:80/index.html: dial tcp 10.244.1.187:80: connect: connection refused
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:55:36 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Unhealthy: Readiness probe failed: Get http://10.244.1.187:80/index.html: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:55:45 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:55:55 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Pulled: Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:57:57 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Failed: Error: context deadline exceeded
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:57:58 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Failed: Error: failed to reserve container name "webserver_ss2-1_statefulset-8828_201a2caf-081b-4a2c-bbe8-ff9f48cec630_0": name "webserver_ss2-1_statefulset-8828_201a2caf-081b-4a2c-bbe8-ff9f48cec630_0" is reserved for "e7b264680089225d29873a3b30354bdea046e4ab0055e02919074e8ab5ca320c"
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:59:13 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Created: Created container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:59:14 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Unhealthy: Readiness probe failed: Get http://10.244.1.189:80/index.html: dial tcp 10.244.1.189:80: connect: connection refused
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:59:14 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Started: Started container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:59:16 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-0 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:59:16 +0000 UTC - event for ss2-0: {kubelet kali-worker} Killing: Stopping container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 11:59:16 +0000 UTC - event for ss2-0: {kubelet kali-worker} Unhealthy: Readiness probe failed: Get http://10.244.2.183:80/index.html: dial tcp 10.244.2.183:80: connect: connection refused
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:02:22 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:02:23 +0000 UTC - event for ss2-0: {kubelet kali-worker2} Pulled: Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:04:23 +0000 UTC - event for ss2-0: {kubelet kali-worker2} Failed: Error: context deadline exceeded
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:04:24 +0000 UTC - event for ss2-0: {kubelet kali-worker2} Failed: Error: failed to reserve container name "webserver_ss2-0_statefulset-8828_4691d211-f328-4549-97bc-f4ab572ddb68_0": name "webserver_ss2-0_statefulset-8828_4691d211-f328-4549-97bc-f4ab572ddb68_0" is reserved for "01939ac9478b14844d7d7068c4dd665cbe4870cb849a0181c074a36bbad85111"
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:06:10 +0000 UTC - event for ss2-0: {kubelet kali-worker2} Created: Created container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:06:10 +0000 UTC - event for ss2-0: {kubelet kali-worker2} Started: Started container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:06:40 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-2 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:06:40 +0000 UTC - event for ss2-2: {kubelet kali-worker} Killing: Stopping container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:06:41 +0000 UTC - event for ss2-2: {kubelet kali-worker} Unhealthy: Readiness probe failed: Get http://10.244.2.184:80/index.html: dial tcp 10.244.2.184:80: connect: connection refused
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:06:41 +0000 UTC - event for ss2-2: {kubelet kali-worker} Unhealthy: Readiness probe failed: Get http://10.244.2.184:80/index.html: read tcp 10.244.2.1:47172->10.244.2.184:80: read: connection reset by peer
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:10:06 +0000 UTC - event for ss2-2: {kubelet kali-worker} FailedKillPod: error killing pod: failed to "KillContainer" for "webserver" with KillContainerError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:10:09 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-1 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:10:09 +0000 UTC - event for ss2-1: {kubelet kali-worker2} Killing: Stopping container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:10:59 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-0 in StatefulSet ss2 successful
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:10:59 +0000 UTC - event for ss2-0: {kubelet kali-worker2} Killing: Stopping container webserver
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:11:00 +0000 UTC - event for ss2-0: {kubelet kali-worker2} Unhealthy: Readiness probe failed: Get http://10.244.1.190:80/index.html: read tcp 10.244.1.1:44620->10.244.1.190:80: read: connection reset by peer
Jul  1 12:11:57.931: INFO: At 2020-07-01 12:11:00 +0000 UTC - event for ss2-0: {kubelet kali-worker2} Unhealthy: Readiness probe failed: Get http://10.244.1.190:80/index.html: dial tcp 10.244.1.190:80: connect: connection refused
Jul  1 12:11:57.935: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  1 12:11:57.935: INFO: 
Jul  1 12:11:57.939: INFO: 
Logging node info for node kali-control-plane
Jul  1 12:11:57.942: INFO: Node Info: &Node{ObjectMeta:{kali-control-plane   /api/v1/nodes/kali-control-plane 84a583c8-90fb-49f1-81ac-1fbe141d1a1c 16800434 0 2020-04-29 09:30:59 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [3 managed-fields entries (kubeadm Update 2020-04-29 09:31:03, kube-controller-manager Update 2020-04-29 09:31:39, kubelet Update 2020-07-01 12:10:07); FieldsV1 raw byte arrays elided for readability]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {}
110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-01 12:10:07 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-01 12:10:07 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-01 12:10:07 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-01 12:10:07 +0000 UTC,LastTransitionTime:2020-04-29 09:31:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.19,},NodeAddress{Type:Hostname,Address:kali-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2146cf85bed648199604ab2e0e9ac609,SystemUUID:e83c0db4-babe-44fc-9dad-b5eeae6d23fd,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 
19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jul  1 12:11:57.942: INFO: 
Logging kubelet events for node kali-control-plane
Jul  1 12:11:57.944: INFO: 
Logging pods the kubelet thinks are on node kali-control-plane
Jul  1 12:11:57.959: INFO: etcd-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:57.959: INFO: 	Container etcd ready: true, restart count 3
Jul  1 12:11:57.959: INFO: coredns-66bff467f8-rvq2k started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:57.959: INFO: 	Container coredns ready: true, restart count 0
Jul  1 12:11:57.959: INFO: kube-scheduler-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:57.959: INFO: 	Container kube-scheduler ready: true, restart count 128
Jul  1 12:11:57.959: INFO: kube-proxy-pnhtq started at 2020-04-29 09:31:19 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:57.959: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  1 12:11:57.959: INFO: kindnet-65djz started at 2020-04-29 09:31:19 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:57.959: INFO: 	Container kindnet-cni ready: true, restart count 4
Jul  1 12:11:57.959: INFO: coredns-66bff467f8-w6zxd started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:57.959: INFO: 	Container coredns ready: true, restart count 0
Jul  1 12:11:57.959: INFO: local-path-provisioner-bd4bb6b75-6l9ph started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:57.959: INFO: 	Container local-path-provisioner ready: true, restart count 94
Jul  1 12:11:57.959: INFO: kube-apiserver-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:57.959: INFO: 	Container kube-apiserver ready: true, restart count 5
Jul  1 12:11:57.959: INFO: kube-controller-manager-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:57.959: INFO: 	Container kube-controller-manager ready: true, restart count 130
W0701 12:11:57.961813       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  1 12:11:58.072: INFO: 
Latency metrics for node kali-control-plane
Jul  1 12:11:58.072: INFO: 
Logging node info for node kali-worker
Jul  1 12:11:58.164: INFO: Node Info: &Node{ObjectMeta:{kali-worker   /api/v1/nodes/kali-worker d9882acc-073c-45e9-9299-9096bf571d2e 16800460 0 2020-04-29 09:31:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [3 managed-fields entries (kubeadm Update 2020-04-29 09:31:37, kube-controller-manager Update 2020-04-29 09:32:06, kubelet Update 2020-07-01 12:10:12); FieldsV1 raw byte arrays elided for readability]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-01 12:10:12 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-01 12:10:12 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-01 12:10:12 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-01 12:10:12 +0000 UTC,LastTransitionTime:2020-04-29 09:32:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.15,},NodeAddress{Type:Hostname,Address:kali-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e96e6d32a4f2448f9fda0690bf27c25a,SystemUUID:62c26944-edd7-4df2-a453-f2dbfa247f6d,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 
docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:d0af3efaa83cf2106879b7fd3972faaee44a0d4a82db97b27f33f8c71aa450b3 docker.io/aquasec/kube-hunter:latest],SizeBytes:127384616,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:9e6d47f5fb42621781fac92b9f8f86a7e00596fd5c022472a51d33b8c6638b85],SizeBytes:126124611,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bdfc3a8aeed63e545ab0df01806707219ffb785bca75e08cbee043075dedfb3c],SizeBytes:8039898,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a3fe72ad3946d830134b92e5c922a92d4aeb594f0445d178f9e2d610b1be04b5],SizeBytes:8039861,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:6da1996cf654bbc10175028832d6ffb92720272d0deca971bb296ea9092f4273],SizeBytes:8039845,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5979eaa13cb8b9b2027f4e75bb350a5af70d73719f2a260fa50f593ef63e857b 
docker.io/aquasec/kube-bench:latest],SizeBytes:8038593,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bab47f459428d6cc682ec6b7cffd4203ce53c413748fe366f2533d0cda2808ce],SizeBytes:8037981,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:cab37ac2de78ddbc6655eddae38239ebafdf79a7934bc53361e1524a2ed5ab56],SizeBytes:8035885,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 
docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209],SizeBytes:764556,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jul  1 12:11:58.165: INFO: 
Logging kubelet events for node kali-worker
Jul  1 12:11:58.179: INFO: 
Logging pods the kubelet thinks are on node kali-worker
Jul  1 12:11:58.194: INFO: kindnet-f8plf started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:58.194: INFO: 	Container kindnet-cni ready: true, restart count 7
Jul  1 12:11:58.194: INFO: kube-proxy-vrswj started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:58.194: INFO: 	Container kube-proxy ready: true, restart count 0
W0701 12:11:58.197475       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  1 12:11:58.233: INFO: 
Latency metrics for node kali-worker
Jul  1 12:11:58.233: INFO: 
Logging node info for node kali-worker2
Jul  1 12:11:58.236: INFO: Node Info: &Node{ObjectMeta:{kali-worker2   /api/v1/nodes/kali-worker2 6eb4ebcc-ce4f-4a4d-bd7f-5f7e293c044e 16800372 0 2020-04-29 09:31:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-04-29 09:31:37 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}},}} {kube-controller-manager Update v1 2020-04-29 09:32:06 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}},}} {kubelet Update v1 2020-07-01 12:09:48 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}},}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-01 12:09:48 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-01 12:09:48 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-01 12:09:48 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-01 12:09:48 +0000 UTC,LastTransitionTime:2020-04-29 09:32:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.18,},NodeAddress{Type:Hostname,Address:kali-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e6c808dc84074a009430113a4db25a88,SystemUUID:a7f2e4d4-2bac-4d1a-b10e-f9b7d6d56664,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 
docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:f1fe1b322f9d19bc9e403709300c37f50c8c855767bc2fbecb081576f83b42d4 docker.io/aquasec/kube-hunter:latest],SizeBytes:127866893,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:31a93c2501d1648258f610a15bbf40a41d4f10c319a621d5f8ab077d87fcf4b7],SizeBytes:127839307,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:d0af3efaa83cf2106879b7fd3972faaee44a0d4a82db97b27f33f8c71aa450b3],SizeBytes:127384616,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:9e6d47f5fb42621781fac92b9f8f86a7e00596fd5c022472a51d33b8c6638b85],SizeBytes:126124611,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 
docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bdfc3a8aeed63e545ab0df01806707219ffb785bca75e08cbee043075dedfb3c],SizeBytes:8039898,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a3fe72ad3946d830134b92e5c922a92d4aeb594f0445d178f9e2d610b1be04b5],SizeBytes:8039861,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:6da1996cf654bbc10175028832d6ffb92720272d0deca971bb296ea9092f4273],SizeBytes:8039845,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bab47f459428d6cc682ec6b7cffd4203ce53c413748fe366f2533d0cda2808ce docker.io/aquasec/kube-bench:latest],SizeBytes:8037981,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209],SizeBytes:764556,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
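Managed-fields Raw payloads, like those in the Node dump above, are JSON documents stored as byte slices; when the Node struct is printed with Go's default formatter they can appear as long lists of decimal byte values. A small Python sketch decodes such a list (here, the kubeadm-owned managedFields entry for this node) back into readable JSON:

```python
import json

# Decimal byte values as printed in the kubeadm managedFields entry above
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123,
       34, 102, 58, 97, 110, 110, 111, 116, 97, 116, 105, 111, 110, 115, 34, 58, 123,
       34, 102, 58, 107, 117, 98, 101, 97, 100, 109, 46, 97, 108, 112, 104, 97, 46,
       107, 117, 98, 101, 114, 110, 101, 116, 101, 115, 46, 105, 111, 47, 99, 114,
       105, 45, 115, 111, 99, 107, 101, 116, 34, 58, 123, 125, 125, 125, 125]

# Interpret the byte values as UTF-8 text to recover the JSON document
decoded = bytes(raw).decode("utf-8")
print(decoded)  # {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}

# The result parses as ordinary JSON (server-side-apply field ownership markers)
fields = json.loads(decoded)
print(fields["f:metadata"]["f:annotations"])
```

The same decoding applies to the longer kube-controller-manager and kubelet entries in the dump.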
Jul  1 12:11:58.237: INFO: 
Logging kubelet events for node kali-worker2
Jul  1 12:11:58.240: INFO: 
Logging pods the kubelet thinks are on node kali-worker2
Jul  1 12:11:58.257: INFO: kindnet-mcdh2 started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:58.257: INFO: 	Container kindnet-cni ready: true, restart count 5
Jul  1 12:11:58.257: INFO: kube-proxy-mmnb6 started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
Jul  1 12:11:58.257: INFO: 	Container kube-proxy ready: true, restart count 0
W0701 12:11:58.260265       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  1 12:11:58.322: INFO: 
Latency metrics for node kali-worker2
Jul  1 12:11:58.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8828" for this suite.

• Failure [1704.554 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703

    Jul  1 12:06:01.494: Failed waiting for state update: timed out waiting for the condition

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:74
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":158,"skipped":2747,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
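The failure above originates in the framework's polling wait (statefulset/wait.go), which repeatedly checks a condition and gives up with "timed out waiting for the condition" once a deadline passes. A minimal Python sketch of that pattern (a hypothetical helper, not the actual Go framework code):

```python
import time

def wait_for_condition(check, timeout_s, interval_s=0.01, clock=time.monotonic):
    """Poll `check` until it returns True or the deadline passes.

    Mirrors the shape of the e2e framework's wait loop, including the
    "timed out waiting for the condition" error seen in the log above.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    raise TimeoutError("timed out waiting for the condition")
```

In the failed test, the condition being polled was the StatefulSet's update status; it never converged within the allotted window, so the wait surfaced the timeout as a test failure.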
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:11:58.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 12:11:58.928: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 12:12:00.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202318, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202318, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202319, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202318, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:12:02.958 - 12:12:26.940: INFO: deployment status unchanged across 13 further polls (Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1; Available:"False"/MinimumReplicasUnavailable, Progressing:"True"/ReplicaSetUpdated "sample-webhook-deployment-779fdc84d9" is progressing)
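Each poll above reports Available:"False" with reason MinimumReplicasUnavailable until the webhook pod becomes ready. A simplified Python sketch of how that condition follows from the replica counts (an assumption-level model of the deployment controller's availability check, with a hypothetical function name):

```python
def available_condition(replicas, available_replicas, max_unavailable=0):
    # A deployment meets minimum availability when the available replica
    # count is at least spec.replicas minus maxUnavailable (simplified
    # model; the real controller also factors in rollout strategy).
    min_available = replicas - max_unavailable
    if available_replicas >= min_available:
        return ("True", "MinimumReplicasAvailable")
    return ("False", "MinimumReplicasUnavailable")
```

With Replicas:1 and AvailableReplicas:0, as in the log, the condition stays ("False", "MinimumReplicasUnavailable") until the single replica reports ready.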
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 12:12:31.365: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:12:34.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4364" for this suite.
STEP: Destroying namespace "webhook-4364-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:36.940 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":159,"skipped":2751,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:12:35.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:12:35.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jul  1 12:12:36.200: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T12:12:36Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T12:12:36Z]] name:name1 resourceVersion:16800930 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:adc1435a-588f-4f3b-a522-2aae29ebc0a8] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jul  1 12:12:46.205: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T12:12:46Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T12:12:46Z]] name:name2 resourceVersion:16800973 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1eb8dec9-2745-4e34-9738-c72321693a4e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jul  1 12:12:56.211: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T12:12:36Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T12:12:56Z]] name:name1 resourceVersion:16801001 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:adc1435a-588f-4f3b-a522-2aae29ebc0a8] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jul  1 12:13:06.215: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T12:12:46Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T12:13:06Z]] name:name2 resourceVersion:16801031 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1eb8dec9-2745-4e34-9738-c72321693a4e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jul  1 12:13:16.396: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T12:12:36Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T12:12:56Z]] name:name1 resourceVersion:16801055 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:adc1435a-588f-4f3b-a522-2aae29ebc0a8] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jul  1 12:13:26.617: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T12:12:46Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T12:13:06Z]] name:name2 resourceVersion:16801081 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1eb8dec9-2745-4e34-9738-c72321693a4e] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:13:37.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-258" for this suite.

• [SLOW TEST:61.867 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":160,"skipped":2762,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:13:37.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:13:37.249: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde" in namespace "projected-5721" to be "Succeeded or Failed"
Jul  1 12:13:37.253: INFO: Pod "downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde": Phase="Pending", Reason="", readiness=false. Elapsed: 3.832878ms
Jul  1 12:13:39.266: INFO: Pod "downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016194114s
Jul  1 12:13:41.269: INFO: Pod "downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019069856s
Jul  1 12:13:43.954: INFO: Pod "downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde": Phase="Pending", Reason="", readiness=false. Elapsed: 6.704998893s
Jul  1 12:13:45.958: INFO: Pod "downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde": Phase="Pending", Reason="", readiness=false. Elapsed: 8.708153142s
Jul  1 12:13:47.961: INFO: Pod "downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde": Phase="Running", Reason="", readiness=true. Elapsed: 10.711841683s
Jul  1 12:13:49.972: INFO: Pod "downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde": Phase="Running", Reason="", readiness=true. Elapsed: 12.722381651s
Jul  1 12:13:51.976: INFO: Pod "downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.726099184s
STEP: Saw pod success
Jul  1 12:13:51.976: INFO: Pod "downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde" satisfied condition "Succeeded or Failed"
Jul  1 12:13:51.979: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde container client-container: 
STEP: delete the pod
Jul  1 12:13:52.254: INFO: Waiting for pod downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde to disappear
Jul  1 12:13:52.275: INFO: Pod downwardapi-volume-3d43e3b1-cb46-4abd-8fcf-d7aad6080bde no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:13:52.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5721" for this suite.

• [SLOW TEST:15.144 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2768,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:13:52.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-e0c5a55b-9ab4-4851-acb4-c1dd4d62bb5f
STEP: Creating secret with name secret-projected-all-test-volume-b1c1a48e-8311-455e-96dd-5617f4203be4
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul  1 12:13:52.427: INFO: Waiting up to 5m0s for pod "projected-volume-5e44ea89-a725-49d4-bb28-8faf5cf85afb" in namespace "projected-2573" to be "Succeeded or Failed"
Jul  1 12:13:52.431: INFO: Pod "projected-volume-5e44ea89-a725-49d4-bb28-8faf5cf85afb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196652ms
Jul  1 12:13:54.958: INFO: Pod "projected-volume-5e44ea89-a725-49d4-bb28-8faf5cf85afb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.530912788s
Jul  1 12:13:56.961: INFO: Pod "projected-volume-5e44ea89-a725-49d4-bb28-8faf5cf85afb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.534685499s
Jul  1 12:14:00.058: INFO: Pod "projected-volume-5e44ea89-a725-49d4-bb28-8faf5cf85afb": Phase="Running", Reason="", readiness=true. Elapsed: 7.63166683s
Jul  1 12:14:02.061: INFO: Pod "projected-volume-5e44ea89-a725-49d4-bb28-8faf5cf85afb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.634217274s
STEP: Saw pod success
Jul  1 12:14:02.061: INFO: Pod "projected-volume-5e44ea89-a725-49d4-bb28-8faf5cf85afb" satisfied condition "Succeeded or Failed"
Jul  1 12:14:02.063: INFO: Trying to get logs from node kali-worker pod projected-volume-5e44ea89-a725-49d4-bb28-8faf5cf85afb container projected-all-volume-test: 
STEP: delete the pod
Jul  1 12:14:02.112: INFO: Waiting for pod projected-volume-5e44ea89-a725-49d4-bb28-8faf5cf85afb to disappear
Jul  1 12:14:02.749: INFO: Pod projected-volume-5e44ea89-a725-49d4-bb28-8faf5cf85afb no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:14:02.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2573" for this suite.

• [SLOW TEST:10.578 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2771,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:14:02.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 12:14:05.297: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 12:14:07.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:14:09.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:14:11.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:14:13.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:14:15.620: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:14:17.310: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:14:19.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202445, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 12:14:22.356: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:14:22.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1117" for this suite.
STEP: Destroying namespace "webhook-1117-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.861 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":163,"skipped":2802,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:14:22.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 12:14:23.725: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 12:14:25.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202463, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202463, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202464, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202463, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:14:28.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202463, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202463, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202464, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202463, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 12:14:30.771: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:14:30.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7821" for this suite.
STEP: Destroying namespace "webhook-7821-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.350 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":164,"skipped":2820,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:14:31.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7670, will wait for the garbage collector to delete the pods
Jul  1 12:14:37.716: INFO: Deleting Job.batch foo took: 6.680998ms
Jul  1 12:14:37.816: INFO: Terminating Job.batch foo pods took: 100.232177ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:15:13.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7670" for this suite.

• [SLOW TEST:42.456 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":165,"skipped":2825,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:15:13.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:15:30.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7115" for this suite.

• [SLOW TEST:17.160 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":166,"skipped":2834,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:15:30.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-7094c3bc-e005-4e84-b405-f6a81be80ca4 in namespace container-probe-2807
Jul  1 12:15:34.832: INFO: Started pod busybox-7094c3bc-e005-4e84-b405-f6a81be80ca4 in namespace container-probe-2807
STEP: checking the pod's current state and verifying that restartCount is present
Jul  1 12:15:34.835: INFO: Initial restart count of pod busybox-7094c3bc-e005-4e84-b405-f6a81be80ca4 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:19:35.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2807" for this suite.

• [SLOW TEST:244.840 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2841,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:19:35.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-b035cc4b-ca10-40eb-83ec-22742665ee34
STEP: Creating a pod to test consume configMaps
Jul  1 12:19:35.623: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3c353f2e-605d-4e55-90e1-cf77f9d44feb" in namespace "projected-5952" to be "Succeeded or Failed"
Jul  1 12:19:35.688: INFO: Pod "pod-projected-configmaps-3c353f2e-605d-4e55-90e1-cf77f9d44feb": Phase="Pending", Reason="", readiness=false. Elapsed: 65.263285ms
Jul  1 12:19:37.754: INFO: Pod "pod-projected-configmaps-3c353f2e-605d-4e55-90e1-cf77f9d44feb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130926191s
Jul  1 12:19:39.758: INFO: Pod "pod-projected-configmaps-3c353f2e-605d-4e55-90e1-cf77f9d44feb": Phase="Running", Reason="", readiness=true. Elapsed: 4.134798118s
Jul  1 12:19:41.762: INFO: Pod "pod-projected-configmaps-3c353f2e-605d-4e55-90e1-cf77f9d44feb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.139472354s
STEP: Saw pod success
Jul  1 12:19:41.762: INFO: Pod "pod-projected-configmaps-3c353f2e-605d-4e55-90e1-cf77f9d44feb" satisfied condition "Succeeded or Failed"
Jul  1 12:19:41.766: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-3c353f2e-605d-4e55-90e1-cf77f9d44feb container projected-configmap-volume-test: 
STEP: delete the pod
Jul  1 12:19:41.817: INFO: Waiting for pod pod-projected-configmaps-3c353f2e-605d-4e55-90e1-cf77f9d44feb to disappear
Jul  1 12:19:42.005: INFO: Pod pod-projected-configmaps-3c353f2e-605d-4e55-90e1-cf77f9d44feb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:19:42.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5952" for this suite.

• [SLOW TEST:6.485 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":2849,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:19:42.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:19:42.257: INFO: Creating ReplicaSet my-hostname-basic-2552e365-9295-4890-980d-a066350e982d
Jul  1 12:19:42.426: INFO: Pod name my-hostname-basic-2552e365-9295-4890-980d-a066350e982d: Found 0 pods out of 1
Jul  1 12:19:47.431: INFO: Pod name my-hostname-basic-2552e365-9295-4890-980d-a066350e982d: Found 1 pods out of 1
Jul  1 12:19:47.431: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2552e365-9295-4890-980d-a066350e982d" is running
Jul  1 12:19:47.434: INFO: Pod "my-hostname-basic-2552e365-9295-4890-980d-a066350e982d-pcjrc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:19:42 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:19:46 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:19:46 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:19:42 +0000 UTC Reason: Message:}])
Jul  1 12:19:47.434: INFO: Trying to dial the pod
Jul  1 12:19:52.456: INFO: Controller my-hostname-basic-2552e365-9295-4890-980d-a066350e982d: Got expected result from replica 1 [my-hostname-basic-2552e365-9295-4890-980d-a066350e982d-pcjrc]: "my-hostname-basic-2552e365-9295-4890-980d-a066350e982d-pcjrc", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:19:52.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9160" for this suite.

• [SLOW TEST:10.451 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":169,"skipped":2861,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:19:52.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  1 12:19:52.593: INFO: Waiting up to 5m0s for pod "pod-ddc0f488-579c-470f-955f-8c2eb5444466" in namespace "emptydir-5149" to be "Succeeded or Failed"
Jul  1 12:19:52.640: INFO: Pod "pod-ddc0f488-579c-470f-955f-8c2eb5444466": Phase="Pending", Reason="", readiness=false. Elapsed: 47.198702ms
Jul  1 12:19:54.645: INFO: Pod "pod-ddc0f488-579c-470f-955f-8c2eb5444466": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052136134s
Jul  1 12:19:56.649: INFO: Pod "pod-ddc0f488-579c-470f-955f-8c2eb5444466": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056180115s
STEP: Saw pod success
Jul  1 12:19:56.649: INFO: Pod "pod-ddc0f488-579c-470f-955f-8c2eb5444466" satisfied condition "Succeeded or Failed"
Jul  1 12:19:56.652: INFO: Trying to get logs from node kali-worker pod pod-ddc0f488-579c-470f-955f-8c2eb5444466 container test-container: 
STEP: delete the pod
Jul  1 12:19:56.840: INFO: Waiting for pod pod-ddc0f488-579c-470f-955f-8c2eb5444466 to disappear
Jul  1 12:19:56.866: INFO: Pod pod-ddc0f488-579c-470f-955f-8c2eb5444466 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:19:56.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5149" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2869,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:19:56.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 12:19:57.950: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 12:20:00.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202798, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202798, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202798, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202797, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:20:02.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202798, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202798, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202798, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202797, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 12:20:05.324: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:20:05.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-905-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:20:06.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3715" for this suite.
STEP: Destroying namespace "webhook-3715-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.138 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":171,"skipped":2886,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:20:07.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:20:08.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3696" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2912,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:20:09.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  1 12:20:10.979: INFO: Waiting up to 5m0s for pod "pod-1ff78327-8cc9-4b93-ad57-a591ac7a7918" in namespace "emptydir-2054" to be "Succeeded or Failed"
Jul  1 12:20:11.050: INFO: Pod "pod-1ff78327-8cc9-4b93-ad57-a591ac7a7918": Phase="Pending", Reason="", readiness=false. Elapsed: 70.729108ms
Jul  1 12:20:13.102: INFO: Pod "pod-1ff78327-8cc9-4b93-ad57-a591ac7a7918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123435383s
Jul  1 12:20:15.299: INFO: Pod "pod-1ff78327-8cc9-4b93-ad57-a591ac7a7918": Phase="Running", Reason="", readiness=true. Elapsed: 4.32023801s
Jul  1 12:20:17.304: INFO: Pod "pod-1ff78327-8cc9-4b93-ad57-a591ac7a7918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.32542688s
STEP: Saw pod success
Jul  1 12:20:17.304: INFO: Pod "pod-1ff78327-8cc9-4b93-ad57-a591ac7a7918" satisfied condition "Succeeded or Failed"
Jul  1 12:20:17.308: INFO: Trying to get logs from node kali-worker2 pod pod-1ff78327-8cc9-4b93-ad57-a591ac7a7918 container test-container: 
STEP: delete the pod
Jul  1 12:20:17.372: INFO: Waiting for pod pod-1ff78327-8cc9-4b93-ad57-a591ac7a7918 to disappear
Jul  1 12:20:17.376: INFO: Pod pod-1ff78327-8cc9-4b93-ad57-a591ac7a7918 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:20:17.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2054" for this suite.

• [SLOW TEST:7.752 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2912,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:20:17.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jul  1 12:20:18.182: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jul  1 12:20:20.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202818, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202818, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202818, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202818, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:20:22.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202818, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202818, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202818, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729202818, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 12:20:25.455: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:20:25.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:20:26.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1930" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:9.467 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":174,"skipped":2923,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:20:26.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-ac8c1f35-73f7-4290-91aa-a2b017373f92
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:20:26.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6740" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":175,"skipped":2928,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
S
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:20:26.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul  1 12:20:31.621: INFO: Successfully updated pod "annotationupdate2a8a5f3d-fc48-426c-8a64-2953ab7775b7"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:20:35.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6202" for this suite.

• [SLOW TEST:8.940 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2929,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
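The Downward API test above creates a pod whose volume exposes `metadata.annotations`, updates the annotations, and waits for the change to appear inside the mounted file. A minimal sketch of such a pod spec, with container name, image, and mount path chosen for illustration (only the `downwardAPI` volume shape is the point):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # illustrative name
  annotations:
    build: "one"
spec:
  containers:
  - name: client
    image: registry.k8s.io/e2e-test-images/agnhost:2.21   # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```

The kubelet refreshes downward API volumes periodically, so an annotation change (e.g. via `kubectl annotate`) eventually shows up in `/etc/podinfo/annotations`, which is what the "Successfully updated pod" step exercises.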
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:20:35.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-099b0c34-7a14-4b02-91d7-df827bd6bc9e
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-099b0c34-7a14-4b02-91d7-df827bd6bc9e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:20:42.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5735" for this suite.

• [SLOW TEST:6.945 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":2962,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
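The projected-ConfigMap test above mounts a ConfigMap through a `projected` volume, updates the ConfigMap, and waits to observe the new value in the volume. A minimal sketch of a comparable pod spec, reusing the ConfigMap name from the log (container name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  containers:
  - name: client
    image: registry.k8s.io/e2e-test-images/agnhost:2.21   # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd-099b0c34-7a14-4b02-91d7-df827bd6bc9e
```

As with downward API volumes, the kubelet propagates ConfigMap updates into projected volumes on its sync loop, which accounts for the "waiting to observe update in volume" step.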
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:20:42.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul  1 12:20:49.966: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:20:51.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9286" for this suite.

• [SLOW TEST:8.530 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":178,"skipped":2980,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
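The ReplicaSet test above first creates a bare pod labeled `name=pod-adoption-release`, then creates a ReplicaSet whose selector matches that label: the controller adopts the orphan (adding an ownerReference) instead of creating a new pod. Changing the pod's label then releases it, and the ReplicaSet creates a replacement. A sketch of the matching ReplicaSet (image is illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.2   # illustrative image
```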
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:20:51.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-4b903104-3835-4a80-a769-db175e14ff48
STEP: Creating a pod to test consume secrets
Jul  1 12:20:52.197: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c955e5dd-51d7-4708-b57d-312c032de570" in namespace "projected-3432" to be "Succeeded or Failed"
Jul  1 12:20:52.302: INFO: Pod "pod-projected-secrets-c955e5dd-51d7-4708-b57d-312c032de570": Phase="Pending", Reason="", readiness=false. Elapsed: 104.666625ms
Jul  1 12:20:54.329: INFO: Pod "pod-projected-secrets-c955e5dd-51d7-4708-b57d-312c032de570": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132210488s
Jul  1 12:20:56.333: INFO: Pod "pod-projected-secrets-c955e5dd-51d7-4708-b57d-312c032de570": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136082198s
Jul  1 12:20:58.342: INFO: Pod "pod-projected-secrets-c955e5dd-51d7-4708-b57d-312c032de570": Phase="Running", Reason="", readiness=true. Elapsed: 6.144519687s
Jul  1 12:21:00.346: INFO: Pod "pod-projected-secrets-c955e5dd-51d7-4708-b57d-312c032de570": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.148381893s
STEP: Saw pod success
Jul  1 12:21:00.346: INFO: Pod "pod-projected-secrets-c955e5dd-51d7-4708-b57d-312c032de570" satisfied condition "Succeeded or Failed"
Jul  1 12:21:00.348: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-c955e5dd-51d7-4708-b57d-312c032de570 container projected-secret-volume-test: 
STEP: delete the pod
Jul  1 12:21:00.593: INFO: Waiting for pod pod-projected-secrets-c955e5dd-51d7-4708-b57d-312c032de570 to disappear
Jul  1 12:21:00.635: INFO: Pod pod-projected-secrets-c955e5dd-51d7-4708-b57d-312c032de570 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:21:00.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3432" for this suite.

• [SLOW TEST:9.289 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":2982,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
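The projected-Secret test above mounts a Secret through a `projected` volume with `defaultMode` set, then has the test container verify the file permissions and contents before exiting (hence the "Succeeded or Failed" wait). A sketch of the relevant volume shape, reusing the Secret name from the log; the mode value and the rest of the pod are illustrative, since the log does not show them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  containers:
  - name: projected-secret-volume-test
    image: registry.k8s.io/e2e-test-images/agnhost:2.21   # illustrative image
    command: ["mounttest", "--file_mode=/etc/secret/data-1"]   # illustrative check
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400   # illustrative mode; the point of the test is that it is honored
      sources:
      - secret:
          name: projected-secret-test-4b903104-3835-4a80-a769-db175e14ff48
```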
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:21:00.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-98dbba81-03f2-4496-af65-cbed50b65305 in namespace container-probe-3867
Jul  1 12:21:04.808: INFO: Started pod liveness-98dbba81-03f2-4496-af65-cbed50b65305 in namespace container-probe-3867
STEP: checking the pod's current state and verifying that restartCount is present
Jul  1 12:21:04.811: INFO: Initial restart count of pod liveness-98dbba81-03f2-4496-af65-cbed50b65305 is 0
Jul  1 12:21:24.928: INFO: Restart count of pod container-probe-3867/liveness-98dbba81-03f2-4496-af65-cbed50b65305 is now 1 (20.116754014s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:21:24.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3867" for this suite.

• [SLOW TEST:24.370 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3009,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
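The probe test above creates a pod whose `/healthz` endpoint starts failing shortly after startup, so the kubelet kills and restarts the container; the log shows restartCount going from 0 to 1 in about 20 seconds. A minimal sketch of a comparable liveness configuration (image, args, port, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-example   # illustrative name
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/e2e-test-images/agnhost:2.21   # illustrative image
    args: ["liveness"]   # illustrative: a server whose /healthz begins failing
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
```

With `failureThreshold: 1`, a single failed probe after the initial delay is enough to trigger a restart, matching the quick restartCount bump observed above.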
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:21:25.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul  1 12:21:25.081: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  1 12:21:25.328: INFO: Waiting for terminating namespaces to be deleted...
Jul  1 12:21:25.486: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jul  1 12:21:25.495: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jul  1 12:21:25.495: INFO: 	Container kindnet-cni ready: true, restart count 7
Jul  1 12:21:25.495: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jul  1 12:21:25.495: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  1 12:21:25.495: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jul  1 12:21:25.499: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jul  1 12:21:25.499: INFO: 	Container kindnet-cni ready: true, restart count 5
Jul  1 12:21:25.499: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jul  1 12:21:25.499: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c1acec46-a09c-4be7-874f-08ddebc458b1 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-c1acec46-a09c-4be7-874f-08ddebc458b1 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c1acec46-a09c-4be7-874f-08ddebc458b1
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:21:43.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3521" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:18.959 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":181,"skipped":3028,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
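The scheduling test above relies on the scheduler treating a host port conflict as the full (hostIP, hostPort, protocol) tuple: pod1 (TCP, 127.0.0.1:54321), pod2 (TCP, 127.0.0.2:54321), and pod3 (UDP, 127.0.0.2:54321) all schedule onto the same node because each tuple is distinct. A sketch of the port stanza involved (containerPort value is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostport-example-pod1   # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.2   # illustrative image
    ports:
    - containerPort: 8080   # illustrative container port
      hostPort: 54321
      hostIP: 127.0.0.1     # vary hostIP or protocol and pods can share the node
      protocol: TCP
```

Only a pod requesting the identical (hostIP, hostPort, protocol) triple would be rejected from the node.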
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:21:43.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:21:44.089: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul  1 12:21:44.116: INFO: Number of nodes with available pods: 0
Jul  1 12:21:44.116: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul  1 12:21:44.206: INFO: Number of nodes with available pods: 0
Jul  1 12:21:44.206: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:45.211: INFO: Number of nodes with available pods: 0
Jul  1 12:21:45.211: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:46.212: INFO: Number of nodes with available pods: 0
Jul  1 12:21:46.212: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:47.214: INFO: Number of nodes with available pods: 0
Jul  1 12:21:47.214: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:48.211: INFO: Number of nodes with available pods: 1
Jul  1 12:21:48.211: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul  1 12:21:48.276: INFO: Number of nodes with available pods: 1
Jul  1 12:21:48.276: INFO: Number of running nodes: 0, number of available pods: 1
Jul  1 12:21:49.338: INFO: Number of nodes with available pods: 0
Jul  1 12:21:49.338: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul  1 12:21:49.428: INFO: Number of nodes with available pods: 0
Jul  1 12:21:49.428: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:50.432: INFO: Number of nodes with available pods: 0
Jul  1 12:21:50.432: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:51.433: INFO: Number of nodes with available pods: 0
Jul  1 12:21:51.433: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:52.432: INFO: Number of nodes with available pods: 0
Jul  1 12:21:52.432: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:53.433: INFO: Number of nodes with available pods: 0
Jul  1 12:21:53.433: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:54.433: INFO: Number of nodes with available pods: 0
Jul  1 12:21:54.433: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:55.434: INFO: Number of nodes with available pods: 0
Jul  1 12:21:55.434: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:56.433: INFO: Number of nodes with available pods: 0
Jul  1 12:21:56.433: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:57.432: INFO: Number of nodes with available pods: 0
Jul  1 12:21:57.432: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:58.432: INFO: Number of nodes with available pods: 0
Jul  1 12:21:58.432: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:21:59.432: INFO: Number of nodes with available pods: 0
Jul  1 12:21:59.432: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:22:00.433: INFO: Number of nodes with available pods: 0
Jul  1 12:22:00.433: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:22:01.433: INFO: Number of nodes with available pods: 0
Jul  1 12:22:01.433: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:22:02.432: INFO: Number of nodes with available pods: 0
Jul  1 12:22:02.432: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:22:03.516: INFO: Number of nodes with available pods: 0
Jul  1 12:22:03.516: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:22:04.432: INFO: Number of nodes with available pods: 0
Jul  1 12:22:04.432: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:22:06.009: INFO: Number of nodes with available pods: 0
Jul  1 12:22:06.009: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:22:06.433: INFO: Number of nodes with available pods: 0
Jul  1 12:22:06.433: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:22:07.433: INFO: Number of nodes with available pods: 0
Jul  1 12:22:07.433: INFO: Node kali-worker is running more than one daemon pod
Jul  1 12:22:08.433: INFO: Number of nodes with available pods: 1
Jul  1 12:22:08.433: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1389, will wait for the garbage collector to delete the pods
Jul  1 12:22:08.499: INFO: Deleting DaemonSet.extensions daemon-set took: 6.88468ms
Jul  1 12:22:08.799: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.250227ms
Jul  1 12:22:23.802: INFO: Number of nodes with available pods: 0
Jul  1 12:22:23.802: INFO: Number of running nodes: 0, number of available pods: 0
Jul  1 12:22:23.805: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1389/daemonsets","resourceVersion":"16803441"},"items":null}

Jul  1 12:22:23.812: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1389/pods","resourceVersion":"16803442"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:22:23.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1389" for this suite.

• [SLOW TEST:39.881 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":182,"skipped":3031,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
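The DaemonSet test above drives scheduling entirely through a pod-template `nodeSelector`: initially no node carries the label, so no daemon pods run; labeling a node "blue" launches one pod; relabeling it "green" unschedules the pod; updating the selector to "green" (and switching the update strategy to RollingUpdate) launches it again. A sketch of the shape involved (label key/value and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # illustrative selector label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      nodeSelector:
        color: blue   # illustrative node label; relabeling nodes drives the test
      containers:
      - name: app
        image: registry.k8s.io/pause:3.2   # illustrative image
```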
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:22:23.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Jul  1 12:22:23.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9685'
Jul  1 12:22:27.129: INFO: stderr: ""
Jul  1 12:22:27.129: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  1 12:22:27.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9685'
Jul  1 12:22:27.262: INFO: stderr: ""
Jul  1 12:22:27.262: INFO: stdout: "update-demo-nautilus-2d72v update-demo-nautilus-ql9s6 "
Jul  1 12:22:27.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d72v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:27.399: INFO: stderr: ""
Jul  1 12:22:27.399: INFO: stdout: ""
Jul  1 12:22:27.399: INFO: update-demo-nautilus-2d72v is created but not running
Jul  1 12:22:32.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9685'
Jul  1 12:22:32.842: INFO: stderr: ""
Jul  1 12:22:32.842: INFO: stdout: "update-demo-nautilus-2d72v update-demo-nautilus-ql9s6 "
Jul  1 12:22:32.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d72v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:33.010: INFO: stderr: ""
Jul  1 12:22:33.010: INFO: stdout: ""
Jul  1 12:22:33.010: INFO: update-demo-nautilus-2d72v is created but not running
Jul  1 12:22:38.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9685'
Jul  1 12:22:38.124: INFO: stderr: ""
Jul  1 12:22:38.124: INFO: stdout: "update-demo-nautilus-2d72v update-demo-nautilus-ql9s6 "
Jul  1 12:22:38.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d72v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:38.233: INFO: stderr: ""
Jul  1 12:22:38.233: INFO: stdout: "true"
Jul  1 12:22:38.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d72v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:38.321: INFO: stderr: ""
Jul  1 12:22:38.321: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  1 12:22:38.321: INFO: validating pod update-demo-nautilus-2d72v
Jul  1 12:22:38.356: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  1 12:22:38.356: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  1 12:22:38.356: INFO: update-demo-nautilus-2d72v is verified up and running
Jul  1 12:22:38.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ql9s6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:38.451: INFO: stderr: ""
Jul  1 12:22:38.451: INFO: stdout: "true"
Jul  1 12:22:38.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ql9s6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:38.545: INFO: stderr: ""
Jul  1 12:22:38.545: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  1 12:22:38.545: INFO: validating pod update-demo-nautilus-ql9s6
Jul  1 12:22:38.550: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  1 12:22:38.550: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  1 12:22:38.550: INFO: update-demo-nautilus-ql9s6 is verified up and running
STEP: scaling down the replication controller
Jul  1 12:22:38.581: INFO: scanned /root for discovery docs: 
Jul  1 12:22:38.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9685'
Jul  1 12:22:39.776: INFO: stderr: ""
Jul  1 12:22:39.776: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  1 12:22:39.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9685'
Jul  1 12:22:39.873: INFO: stderr: ""
Jul  1 12:22:39.873: INFO: stdout: "update-demo-nautilus-2d72v update-demo-nautilus-ql9s6 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul  1 12:22:44.874: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9685'
Jul  1 12:22:44.970: INFO: stderr: ""
Jul  1 12:22:44.970: INFO: stdout: "update-demo-nautilus-ql9s6 "
Jul  1 12:22:44.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ql9s6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:45.056: INFO: stderr: ""
Jul  1 12:22:45.056: INFO: stdout: "true"
Jul  1 12:22:45.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ql9s6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:45.159: INFO: stderr: ""
Jul  1 12:22:45.159: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  1 12:22:45.159: INFO: validating pod update-demo-nautilus-ql9s6
Jul  1 12:22:45.162: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  1 12:22:45.162: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  1 12:22:45.162: INFO: update-demo-nautilus-ql9s6 is verified up and running
STEP: scaling up the replication controller
Jul  1 12:22:45.163: INFO: scanned /root for discovery docs: 
Jul  1 12:22:45.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9685'
Jul  1 12:22:46.286: INFO: stderr: ""
Jul  1 12:22:46.286: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  1 12:22:46.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9685'
Jul  1 12:22:46.382: INFO: stderr: ""
Jul  1 12:22:46.382: INFO: stdout: "update-demo-nautilus-9hz4l update-demo-nautilus-ql9s6 "
Jul  1 12:22:46.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9hz4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:46.472: INFO: stderr: ""
Jul  1 12:22:46.472: INFO: stdout: ""
Jul  1 12:22:46.472: INFO: update-demo-nautilus-9hz4l is created but not running
Jul  1 12:22:51.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9685'
Jul  1 12:22:51.584: INFO: stderr: ""
Jul  1 12:22:51.584: INFO: stdout: "update-demo-nautilus-9hz4l update-demo-nautilus-ql9s6 "
Jul  1 12:22:51.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9hz4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:51.679: INFO: stderr: ""
Jul  1 12:22:51.679: INFO: stdout: "true"
Jul  1 12:22:51.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9hz4l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:51.768: INFO: stderr: ""
Jul  1 12:22:51.768: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  1 12:22:51.768: INFO: validating pod update-demo-nautilus-9hz4l
Jul  1 12:22:51.785: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  1 12:22:51.785: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  1 12:22:51.785: INFO: update-demo-nautilus-9hz4l is verified up and running
Jul  1 12:22:51.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ql9s6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:51.871: INFO: stderr: ""
Jul  1 12:22:51.871: INFO: stdout: "true"
Jul  1 12:22:51.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ql9s6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9685'
Jul  1 12:22:51.962: INFO: stderr: ""
Jul  1 12:22:51.962: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  1 12:22:51.962: INFO: validating pod update-demo-nautilus-ql9s6
Jul  1 12:22:51.966: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  1 12:22:51.966: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  1 12:22:51.966: INFO: update-demo-nautilus-ql9s6 is verified up and running
STEP: using delete to clean up resources
Jul  1 12:22:51.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9685'
Jul  1 12:22:52.069: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  1 12:22:52.069: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  1 12:22:52.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9685'
Jul  1 12:22:52.172: INFO: stderr: "No resources found in kubectl-9685 namespace.\n"
Jul  1 12:22:52.172: INFO: stdout: ""
Jul  1 12:22:52.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9685 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  1 12:22:52.274: INFO: stderr: ""
Jul  1 12:22:52.274: INFO: stdout: "update-demo-nautilus-9hz4l\nupdate-demo-nautilus-ql9s6\n"
Jul  1 12:22:52.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9685'
Jul  1 12:22:52.945: INFO: stderr: "No resources found in kubectl-9685 namespace.\n"
Jul  1 12:22:52.945: INFO: stdout: ""
Jul  1 12:22:52.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9685 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  1 12:22:53.043: INFO: stderr: ""
Jul  1 12:22:53.043: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:22:53.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9685" for this suite.

• [SLOW TEST:29.218 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":183,"skipped":3060,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:22:53.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-cgxjs in namespace proxy-7926
I0701 12:22:53.593081       7 runners.go:190] Created replication controller with name: proxy-service-cgxjs, namespace: proxy-7926, replica count: 1
I0701 12:22:54.643751       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0701 12:22:55.643971       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0701 12:22:56.644198       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0701 12:22:57.644456       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0701 12:22:58.644732       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0701 12:22:59.644968       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0701 12:23:00.645385       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0701 12:23:01.645649       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0701 12:23:02.645918       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0701 12:23:03.646233       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0701 12:23:04.646425       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0701 12:23:05.646667       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0701 12:23:06.646960       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0701 12:23:07.647169       7 runners.go:190] proxy-service-cgxjs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  1 12:23:07.652: INFO: setup took 14.489709175s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul  1 12:23:07.657: INFO: (0) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 4.858131ms)
Jul  1 12:23:07.657: INFO: (0) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 4.77798ms)
Jul  1 12:23:07.657: INFO: (0) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 5.434147ms)
Jul  1 12:23:07.657: INFO: (0) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 5.338048ms)
Jul  1 12:23:07.659: INFO: (0) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 6.759989ms)
Jul  1 12:23:07.662: INFO: (0) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 9.817196ms)
Jul  1 12:23:07.662: INFO: (0) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 9.766034ms)
Jul  1 12:23:07.662: INFO: (0) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 9.803663ms)
Jul  1 12:23:07.662: INFO: (0) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 10.031395ms)
Jul  1 12:23:07.662: INFO: (0) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname1/proxy/: foo (200; 9.952698ms)
Jul  1 12:23:07.662: INFO: (0) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname1/proxy/: foo (200; 9.969907ms)
Jul  1 12:23:07.665: INFO: (0) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname2/proxy/: tls qux (200; 13.131064ms)
Jul  1 12:23:07.665: INFO: (0) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: test<... (200; 5.695875ms)
Jul  1 12:23:07.680: INFO: (1) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 6.169473ms)
Jul  1 12:23:07.680: INFO: (1) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 6.060973ms)
Jul  1 12:23:07.680: INFO: (1) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 6.133114ms)
Jul  1 12:23:07.680: INFO: (1) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: test (200; 4.132055ms)
Jul  1 12:23:07.685: INFO: (2) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 4.364534ms)
Jul  1 12:23:07.685: INFO: (2) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 4.512545ms)
Jul  1 12:23:07.685: INFO: (2) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 4.513566ms)
Jul  1 12:23:07.685: INFO: (2) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: ... (200; 6.101171ms)
Jul  1 12:23:07.687: INFO: (2) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 6.343754ms)
Jul  1 12:23:07.687: INFO: (2) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 6.438772ms)
Jul  1 12:23:07.687: INFO: (2) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname1/proxy/: foo (200; 6.43386ms)
Jul  1 12:23:07.687: INFO: (2) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 6.388431ms)
Jul  1 12:23:07.687: INFO: (2) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 6.467061ms)
Jul  1 12:23:07.687: INFO: (2) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/: tls baz (200; 6.554556ms)
Jul  1 12:23:07.687: INFO: (2) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 6.591497ms)
Jul  1 12:23:07.687: INFO: (2) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname2/proxy/: tls qux (200; 6.661456ms)
Jul  1 12:23:07.690: INFO: (3) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 2.920388ms)
Jul  1 12:23:07.690: INFO: (3) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 3.253411ms)
Jul  1 12:23:07.691: INFO: (3) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 3.398092ms)
Jul  1 12:23:07.697: INFO: (3) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 9.695227ms)
Jul  1 12:23:07.697: INFO: (3) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 9.806003ms)
Jul  1 12:23:07.697: INFO: (3) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 9.776864ms)
Jul  1 12:23:07.697: INFO: (3) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 9.878952ms)
Jul  1 12:23:07.698: INFO: (3) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 10.559343ms)
Jul  1 12:23:07.698: INFO: (3) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 10.623827ms)
Jul  1 12:23:07.698: INFO: (3) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 10.667953ms)
Jul  1 12:23:07.698: INFO: (3) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/: tls baz (200; 10.727396ms)
Jul  1 12:23:07.698: INFO: (3) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: test<... (200; 6.504434ms)
Jul  1 12:23:07.733: INFO: (4) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 6.595263ms)
Jul  1 12:23:07.733: INFO: (4) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 6.80057ms)
Jul  1 12:23:07.733: INFO: (4) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 6.901453ms)
Jul  1 12:23:07.733: INFO: (4) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname2/proxy/: tls qux (200; 6.888199ms)
Jul  1 12:23:07.733: INFO: (4) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname1/proxy/: foo (200; 7.205784ms)
Jul  1 12:23:07.734: INFO: (4) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 7.168009ms)
Jul  1 12:23:07.734: INFO: (4) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname1/proxy/: foo (200; 7.227612ms)
Jul  1 12:23:07.734: INFO: (4) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/: tls baz (200; 7.287175ms)
Jul  1 12:23:07.734: INFO: (4) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 7.481746ms)
Jul  1 12:23:07.734: INFO: (4) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 7.460523ms)
Jul  1 12:23:07.738: INFO: (5) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 4.085652ms)
Jul  1 12:23:07.738: INFO: (5) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 4.061619ms)
Jul  1 12:23:07.738: INFO: (5) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 4.377591ms)
Jul  1 12:23:07.738: INFO: (5) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 4.31009ms)
Jul  1 12:23:07.739: INFO: (5) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 4.768039ms)
Jul  1 12:23:07.739: INFO: (5) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 4.840645ms)
Jul  1 12:23:07.739: INFO: (5) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 4.942116ms)
Jul  1 12:23:07.739: INFO: (5) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 4.845061ms)
Jul  1 12:23:07.739: INFO: (5) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: ... (200; 6.314747ms)
Jul  1 12:23:07.741: INFO: (5) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 7.116449ms)
Jul  1 12:23:07.741: INFO: (5) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 7.200851ms)
Jul  1 12:23:07.741: INFO: (5) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname1/proxy/: foo (200; 7.240249ms)
Jul  1 12:23:07.741: INFO: (5) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname1/proxy/: foo (200; 7.166177ms)
Jul  1 12:23:07.745: INFO: (6) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 3.643317ms)
Jul  1 12:23:07.745: INFO: (6) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 3.883895ms)
Jul  1 12:23:07.745: INFO: (6) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 3.852982ms)
Jul  1 12:23:07.745: INFO: (6) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 3.930173ms)
Jul  1 12:23:07.745: INFO: (6) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 3.876965ms)
Jul  1 12:23:07.745: INFO: (6) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 3.980705ms)
Jul  1 12:23:07.745: INFO: (6) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 3.900854ms)
Jul  1 12:23:07.745: INFO: (6) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 4.173163ms)
Jul  1 12:23:07.746: INFO: (6) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 4.256173ms)
Jul  1 12:23:07.746: INFO: (6) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: ... (200; 4.514059ms)
Jul  1 12:23:07.752: INFO: (7) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname2/proxy/: tls qux (200; 4.48721ms)
Jul  1 12:23:07.752: INFO: (7) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 4.530322ms)
Jul  1 12:23:07.752: INFO: (7) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 4.816047ms)
Jul  1 12:23:07.752: INFO: (7) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 4.873179ms)
Jul  1 12:23:07.752: INFO: (7) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 4.908456ms)
Jul  1 12:23:07.752: INFO: (7) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 4.917355ms)
Jul  1 12:23:07.752: INFO: (7) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: ... (200; 4.059721ms)
Jul  1 12:23:07.758: INFO: (8) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 4.203319ms)
Jul  1 12:23:07.758: INFO: (8) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 4.186019ms)
Jul  1 12:23:07.758: INFO: (8) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: test (200; 4.528457ms)
Jul  1 12:23:07.759: INFO: (8) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 4.723297ms)
Jul  1 12:23:07.761: INFO: (8) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 6.853363ms)
Jul  1 12:23:07.761: INFO: (8) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname1/proxy/: foo (200; 6.936302ms)
Jul  1 12:23:07.761: INFO: (8) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 6.908573ms)
Jul  1 12:23:07.761: INFO: (8) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/: tls baz (200; 6.847944ms)
Jul  1 12:23:07.761: INFO: (8) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname1/proxy/: foo (200; 6.904543ms)
Jul  1 12:23:07.761: INFO: (8) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname2/proxy/: tls qux (200; 7.076768ms)
Jul  1 12:23:07.767: INFO: (9) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 6.080533ms)
Jul  1 12:23:07.767: INFO: (9) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 6.112203ms)
Jul  1 12:23:07.767: INFO: (9) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 6.125905ms)
Jul  1 12:23:07.767: INFO: (9) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 6.137437ms)
Jul  1 12:23:07.767: INFO: (9) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 6.206657ms)
Jul  1 12:23:07.767: INFO: (9) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 6.238795ms)
Jul  1 12:23:07.767: INFO: (9) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 6.186738ms)
Jul  1 12:23:07.767: INFO: (9) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 6.167097ms)
Jul  1 12:23:07.767: INFO: (9) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 6.154469ms)
Jul  1 12:23:07.767: INFO: (9) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: test<... (200; 2.968523ms)
Jul  1 12:23:07.773: INFO: (10) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname2/proxy/: tls qux (200; 4.912479ms)
Jul  1 12:23:07.773: INFO: (10) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/: tls baz (200; 5.113919ms)
Jul  1 12:23:07.773: INFO: (10) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 5.148488ms)
Jul  1 12:23:07.773: INFO: (10) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 5.18549ms)
Jul  1 12:23:07.773: INFO: (10) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 5.180506ms)
Jul  1 12:23:07.773: INFO: (10) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname1/proxy/: foo (200; 5.795218ms)
Jul  1 12:23:07.773: INFO: (10) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 5.791833ms)
Jul  1 12:23:07.773: INFO: (10) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 5.861344ms)
Jul  1 12:23:07.773: INFO: (10) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 5.803134ms)
Jul  1 12:23:07.773: INFO: (10) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: test (200; 5.811151ms)
Jul  1 12:23:07.773: INFO: (10) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname1/proxy/: foo (200; 5.894425ms)
Jul  1 12:23:07.774: INFO: (10) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 5.858183ms)
Jul  1 12:23:07.774: INFO: (10) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 5.879543ms)
Jul  1 12:23:07.776: INFO: (11) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 2.173458ms)
Jul  1 12:23:07.783: INFO: (11) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 9.269321ms)
Jul  1 12:23:07.783: INFO: (11) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 9.290979ms)
Jul  1 12:23:07.783: INFO: (11) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 9.343201ms)
Jul  1 12:23:07.783: INFO: (11) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 9.330621ms)
Jul  1 12:23:07.783: INFO: (11) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: test<... (200; 9.433554ms)
Jul  1 12:23:07.783: INFO: (11) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/: tls baz (200; 9.397141ms)
Jul  1 12:23:07.783: INFO: (11) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname1/proxy/: foo (200; 9.396597ms)
Jul  1 12:23:07.783: INFO: (11) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 9.353534ms)
Jul  1 12:23:07.783: INFO: (11) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname1/proxy/: foo (200; 9.429814ms)
Jul  1 12:23:07.783: INFO: (11) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 9.417634ms)
Jul  1 12:23:07.783: INFO: (11) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname2/proxy/: tls qux (200; 9.854894ms)
Jul  1 12:23:07.784: INFO: (11) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 9.938095ms)
Jul  1 12:23:07.790: INFO: (12) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 6.127582ms)
Jul  1 12:23:07.790: INFO: (12) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 6.101893ms)
Jul  1 12:23:07.791: INFO: (12) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 7.658664ms)
Jul  1 12:23:07.792: INFO: (12) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 8.46142ms)
Jul  1 12:23:07.792: INFO: (12) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 8.569374ms)
Jul  1 12:23:07.792: INFO: (12) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 8.578158ms)
Jul  1 12:23:07.793: INFO: (12) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 9.135299ms)
Jul  1 12:23:07.793: INFO: (12) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 8.888097ms)
Jul  1 12:23:07.793: INFO: (12) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: test (200; 4.356592ms)
Jul  1 12:23:07.799: INFO: (13) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 4.328407ms)
Jul  1 12:23:07.799: INFO: (13) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 4.382891ms)
Jul  1 12:23:07.799: INFO: (13) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 4.333227ms)
Jul  1 12:23:07.799: INFO: (13) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 4.341819ms)
Jul  1 12:23:07.799: INFO: (13) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 4.524595ms)
Jul  1 12:23:07.799: INFO: (13) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: ... (200; 6.589443ms)
Jul  1 12:23:07.805: INFO: (14) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: ... (200; 5.084934ms)
Jul  1 12:23:07.806: INFO: (14) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 5.07847ms)
Jul  1 12:23:07.806: INFO: (14) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/: tls baz (200; 5.092008ms)
Jul  1 12:23:07.806: INFO: (14) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 5.09828ms)
Jul  1 12:23:07.807: INFO: (14) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 5.475832ms)
Jul  1 12:23:07.807: INFO: (14) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 5.461505ms)
Jul  1 12:23:07.807: INFO: (14) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 5.470305ms)
Jul  1 12:23:07.807: INFO: (14) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname1/proxy/: foo (200; 5.466275ms)
Jul  1 12:23:07.807: INFO: (14) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 5.779383ms)
Jul  1 12:23:07.807: INFO: (14) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname2/proxy/: tls qux (200; 5.872342ms)
Jul  1 12:23:07.811: INFO: (15) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 3.799651ms)
Jul  1 12:23:07.811: INFO: (15) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 4.011311ms)
Jul  1 12:23:07.811: INFO: (15) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 4.006245ms)
Jul  1 12:23:07.811: INFO: (15) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: ... (200; 4.83462ms)
Jul  1 12:23:07.812: INFO: (15) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 4.886684ms)
Jul  1 12:23:07.812: INFO: (15) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname2/proxy/: tls qux (200; 5.063569ms)
Jul  1 12:23:07.812: INFO: (15) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 5.099192ms)
Jul  1 12:23:07.812: INFO: (15) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 5.05463ms)
Jul  1 12:23:07.813: INFO: (15) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/: tls baz (200; 5.465165ms)
Jul  1 12:23:07.813: INFO: (15) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname1/proxy/: foo (200; 5.527508ms)
Jul  1 12:23:07.815: INFO: (16) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: test (200; 4.422267ms)
Jul  1 12:23:07.817: INFO: (16) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 4.424522ms)
Jul  1 12:23:07.817: INFO: (16) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 4.438656ms)
Jul  1 12:23:07.818: INFO: (16) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 4.63871ms)
Jul  1 12:23:07.818: INFO: (16) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname1/proxy/: foo (200; 5.118352ms)
Jul  1 12:23:07.818: INFO: (16) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname1/proxy/: foo (200; 5.307238ms)
Jul  1 12:23:07.818: INFO: (16) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 5.416813ms)
Jul  1 12:23:07.818: INFO: (16) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 5.452338ms)
Jul  1 12:23:07.819: INFO: (16) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 5.659819ms)
Jul  1 12:23:07.819: INFO: (16) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname2/proxy/: tls qux (200; 5.712722ms)
Jul  1 12:23:07.819: INFO: (16) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 5.786722ms)
Jul  1 12:23:07.819: INFO: (16) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/: tls baz (200; 5.777444ms)
Jul  1 12:23:07.819: INFO: (16) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 6.089976ms)
Jul  1 12:23:07.819: INFO: (16) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 6.135398ms)
Jul  1 12:23:07.821: INFO: (17) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 2.129741ms)
Jul  1 12:23:07.822: INFO: (17) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 3.112705ms)
Jul  1 12:23:07.822: INFO: (17) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 3.207544ms)
Jul  1 12:23:07.824: INFO: (17) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls/proxy/: test (200; 5.012308ms)
Jul  1 12:23:07.825: INFO: (17) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 5.830752ms)
Jul  1 12:23:07.825: INFO: (17) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname1/proxy/: foo (200; 5.979976ms)
Jul  1 12:23:07.825: INFO: (17) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 6.021927ms)
Jul  1 12:23:07.825: INFO: (17) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 6.168187ms)
Jul  1 12:23:07.825: INFO: (17) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 6.189988ms)
Jul  1 12:23:07.825: INFO: (17) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 6.232045ms)
Jul  1 12:23:07.825: INFO: (17) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname1/proxy/: foo (200; 6.204484ms)
Jul  1 12:23:07.826: INFO: (17) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/: tls baz (200; 6.497171ms)
Jul  1 12:23:07.826: INFO: (17) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: test (200; 4.867439ms)
Jul  1 12:23:07.831: INFO: (18) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 4.886422ms)
Jul  1 12:23:07.831: INFO: (18) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 4.959117ms)
Jul  1 12:23:07.831: INFO: (18) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 4.970354ms)
Jul  1 12:23:07.831: INFO: (18) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 4.976464ms)
Jul  1 12:23:07.831: INFO: (18) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 5.084407ms)
Jul  1 12:23:07.831: INFO: (18) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: ... (200; 5.093339ms)
Jul  1 12:23:07.831: INFO: (18) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 5.049156ms)
Jul  1 12:23:07.831: INFO: (18) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname1/proxy/: foo (200; 5.037153ms)
Jul  1 12:23:07.831: INFO: (18) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 5.165729ms)
Jul  1 12:23:07.831: INFO: (18) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 5.134975ms)
Jul  1 12:23:07.835: INFO: (19) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 3.516683ms)
Jul  1 12:23:07.835: INFO: (19) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 4.093506ms)
Jul  1 12:23:07.835: INFO: (19) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname2/proxy/: bar (200; 4.227988ms)
Jul  1 12:23:07.835: INFO: (19) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname1/proxy/: foo (200; 4.33935ms)
Jul  1 12:23:07.835: INFO: (19) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:162/proxy/: bar (200; 4.411235ms)
Jul  1 12:23:07.836: INFO: (19) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:462/proxy/: tls qux (200; 4.566406ms)
Jul  1 12:23:07.836: INFO: (19) /api/v1/namespaces/proxy-7926/pods/http:proxy-service-cgxjs-jwwls:1080/proxy/: ... (200; 4.572941ms)
Jul  1 12:23:07.836: INFO: (19) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname2/proxy/: tls qux (200; 4.554101ms)
Jul  1 12:23:07.836: INFO: (19) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:1080/proxy/: test<... (200; 4.535077ms)
Jul  1 12:23:07.836: INFO: (19) /api/v1/namespaces/proxy-7926/pods/proxy-service-cgxjs-jwwls:160/proxy/: foo (200; 4.691775ms)
Jul  1 12:23:07.836: INFO: (19) /api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/: tls baz (200; 4.775865ms)
Jul  1 12:23:07.836: INFO: (19) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:460/proxy/: tls baz (200; 4.819882ms)
Jul  1 12:23:07.836: INFO: (19) /api/v1/namespaces/proxy-7926/services/http:proxy-service-cgxjs:portname1/proxy/: foo (200; 4.853881ms)
Jul  1 12:23:07.836: INFO: (19) /api/v1/namespaces/proxy-7926/services/proxy-service-cgxjs:portname2/proxy/: bar (200; 5.097883ms)
Jul  1 12:23:07.836: INFO: (19) /api/v1/namespaces/proxy-7926/pods/https:proxy-service-cgxjs-jwwls:443/proxy/: test (200; 5.294181ms)
STEP: deleting ReplicationController proxy-service-cgxjs in namespace proxy-7926, will wait for the garbage collector to delete the pods
Jul  1 12:23:07.896: INFO: Deleting ReplicationController proxy-service-cgxjs took: 7.549772ms
Jul  1 12:23:07.996: INFO: Terminating ReplicationController proxy-service-cgxjs pods took: 100.219837ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:23:13.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7926" for this suite.

• [SLOW TEST:20.433 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":184,"skipped":3097,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
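Editor's note: every request logged above targets the API server's proxy subresource, whose path encodes an optional scheme and port into the service or pod name (e.g. `https:proxy-service-cgxjs:tlsportname1`). A minimal sketch of how such a path is assembled; `proxyPath` is a hypothetical helper, not the framework's actual function:

```go
package main

import "fmt"

// proxyPath builds an API-server proxy URL path of the kind exercised above.
// kind is "services" or "pods"; scheme ("http"/"https") and port are optional
// and, when set, are joined onto the resource name with colons.
func proxyPath(kind, namespace, name, scheme, port string) string {
	target := name
	if scheme != "" {
		target = scheme + ":" + target
	}
	if port != "" {
		target = target + ":" + port
	}
	return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/", namespace, kind, target)
}

func main() {
	// Reconstructs one of the paths seen in the log.
	fmt.Println(proxyPath("services", "proxy-7926", "proxy-service-cgxjs", "https", "tlsportname1"))
}
```

Running this prints the same path the test hit: `/api/v1/namespaces/proxy-7926/services/https:proxy-service-cgxjs:tlsportname1/proxy/`.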
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:23:13.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-qqgz
STEP: Creating a pod to test atomic-volume-subpath
Jul  1 12:23:13.694: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qqgz" in namespace "subpath-6165" to be "Succeeded or Failed"
Jul  1 12:23:13.701: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Pending", Reason="", readiness=false. Elapsed: 7.816597ms
Jul  1 12:23:15.784: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090773325s
Jul  1 12:23:17.792: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Running", Reason="", readiness=true. Elapsed: 4.098510589s
Jul  1 12:23:19.796: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Running", Reason="", readiness=true. Elapsed: 6.10285772s
Jul  1 12:23:21.801: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Running", Reason="", readiness=true. Elapsed: 8.107687411s
Jul  1 12:23:23.809: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Running", Reason="", readiness=true. Elapsed: 10.115003301s
Jul  1 12:23:25.812: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Running", Reason="", readiness=true. Elapsed: 12.118628714s
Jul  1 12:23:27.816: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Running", Reason="", readiness=true. Elapsed: 14.122560765s
Jul  1 12:23:29.820: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Running", Reason="", readiness=true. Elapsed: 16.126742219s
Jul  1 12:23:31.825: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Running", Reason="", readiness=true. Elapsed: 18.131381971s
Jul  1 12:23:33.830: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Running", Reason="", readiness=true. Elapsed: 20.136000166s
Jul  1 12:23:35.834: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Running", Reason="", readiness=true. Elapsed: 22.140475353s
Jul  1 12:23:37.841: INFO: Pod "pod-subpath-test-downwardapi-qqgz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.147251741s
STEP: Saw pod success
Jul  1 12:23:37.841: INFO: Pod "pod-subpath-test-downwardapi-qqgz" satisfied condition "Succeeded or Failed"
Jul  1 12:23:37.844: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-downwardapi-qqgz container test-container-subpath-downwardapi-qqgz: 
STEP: delete the pod
Jul  1 12:23:37.985: INFO: Waiting for pod pod-subpath-test-downwardapi-qqgz to disappear
Jul  1 12:23:37.988: INFO: Pod pod-subpath-test-downwardapi-qqgz no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-qqgz
Jul  1 12:23:37.988: INFO: Deleting pod "pod-subpath-test-downwardapi-qqgz" in namespace "subpath-6165"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:23:37.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6165" for this suite.

• [SLOW TEST:24.490 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":185,"skipped":3115,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:23:37.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-a6bfc19a-13cc-429b-af70-a90319d7234a
STEP: Creating a pod to test consume secrets
Jul  1 12:23:38.164: INFO: Waiting up to 5m0s for pod "pod-secrets-7f313dd3-2613-4810-8da1-d6c08f4988ea" in namespace "secrets-1122" to be "Succeeded or Failed"
Jul  1 12:23:38.180: INFO: Pod "pod-secrets-7f313dd3-2613-4810-8da1-d6c08f4988ea": Phase="Pending", Reason="", readiness=false. Elapsed: 15.454054ms
Jul  1 12:23:40.187: INFO: Pod "pod-secrets-7f313dd3-2613-4810-8da1-d6c08f4988ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022833538s
Jul  1 12:23:42.192: INFO: Pod "pod-secrets-7f313dd3-2613-4810-8da1-d6c08f4988ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027519965s
STEP: Saw pod success
Jul  1 12:23:42.192: INFO: Pod "pod-secrets-7f313dd3-2613-4810-8da1-d6c08f4988ea" satisfied condition "Succeeded or Failed"
Jul  1 12:23:42.194: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-7f313dd3-2613-4810-8da1-d6c08f4988ea container secret-volume-test: 
STEP: delete the pod
Jul  1 12:23:42.241: INFO: Waiting for pod pod-secrets-7f313dd3-2613-4810-8da1-d6c08f4988ea to disappear
Jul  1 12:23:42.246: INFO: Pod pod-secrets-7f313dd3-2613-4810-8da1-d6c08f4988ea no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:23:42.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1122" for this suite.
STEP: Destroying namespace "secret-namespace-8711" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3123,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:23:42.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-714
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-714
STEP: Deleting pre-stop pod
Jul  1 12:23:55.534: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:23:55.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-714" for this suite.

• [SLOW TEST:13.299 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":187,"skipped":3133,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:23:55.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul  1 12:23:55.914: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  1 12:23:55.963: INFO: Waiting for terminating namespaces to be deleted...
Jul  1 12:23:55.966: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jul  1 12:23:55.983: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jul  1 12:23:55.984: INFO: 	Container kindnet-cni ready: true, restart count 7
Jul  1 12:23:55.984: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jul  1 12:23:55.984: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  1 12:23:55.984: INFO: server from prestop-714 started at 2020-07-01 12:23:42 +0000 UTC (1 container statuses recorded)
Jul  1 12:23:55.984: INFO: 	Container server ready: true, restart count 0
Jul  1 12:23:55.984: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jul  1 12:23:55.989: INFO: tester from prestop-714 started at 2020-07-01 12:23:46 +0000 UTC (1 container statuses recorded)
Jul  1 12:23:55.989: INFO: 	Container tester ready: true, restart count 0
Jul  1 12:23:55.989: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jul  1 12:23:55.989: INFO: 	Container kindnet-cni ready: true, restart count 5
Jul  1 12:23:55.989: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jul  1 12:23:55.989: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Jul  1 12:23:56.089: INFO: Pod kindnet-f8plf requesting resource cpu=100m on Node kali-worker
Jul  1 12:23:56.089: INFO: Pod kindnet-mcdh2 requesting resource cpu=100m on Node kali-worker2
Jul  1 12:23:56.089: INFO: Pod kube-proxy-mmnb6 requesting resource cpu=0m on Node kali-worker2
Jul  1 12:23:56.089: INFO: Pod kube-proxy-vrswj requesting resource cpu=0m on Node kali-worker
Jul  1 12:23:56.089: INFO: Pod server requesting resource cpu=0m on Node kali-worker
Jul  1 12:23:56.089: INFO: Pod tester requesting resource cpu=0m on Node kali-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Jul  1 12:23:56.089: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
Jul  1 12:23:56.095: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-685c2c63-3084-4494-91a3-c45e7721a6af.161da06db9fc6df2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4577/filler-pod-685c2c63-3084-4494-91a3-c45e7721a6af to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-685c2c63-3084-4494-91a3-c45e7721a6af.161da06e564dba80], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-685c2c63-3084-4494-91a3-c45e7721a6af.161da06e9c392419], Reason = [Created], Message = [Created container filler-pod-685c2c63-3084-4494-91a3-c45e7721a6af]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-685c2c63-3084-4494-91a3-c45e7721a6af.161da06eac39262f], Reason = [Started], Message = [Started container filler-pod-685c2c63-3084-4494-91a3-c45e7721a6af]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c817053b-8f49-464a-b67e-dc11d5895312.161da06db0a580af], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4577/filler-pod-c817053b-8f49-464a-b67e-dc11d5895312 to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c817053b-8f49-464a-b67e-dc11d5895312.161da06e1182aa38], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c817053b-8f49-464a-b67e-dc11d5895312.161da06e4e103aaa], Reason = [Created], Message = [Created container filler-pod-c817053b-8f49-464a-b67e-dc11d5895312]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c817053b-8f49-464a-b67e-dc11d5895312.161da06e69ab0a3e], Reason = [Started], Message = [Started container filler-pod-c817053b-8f49-464a-b67e-dc11d5895312]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.161da06f247e36d0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.161da06f47de014b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:24:03.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4577" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:7.918 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":188,"skipped":3152,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
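Editor's note: the filler-pod size above (cpu=11130m) comes from simple millicore accounting: the node's schedulable CPU minus what existing pods already request, so the subsequent `additional-pod` necessarily fails with `Insufficient cpu`. A sketch of that arithmetic; the 11230m schedulable figure is an assumption chosen to reproduce the logged value, not read from the log:

```go
package main

import "fmt"

// fillerCPU returns the millicore request for a pod that leaves no room on
// the node: schedulable capacity minus the sum of existing pod requests.
func fillerCPU(capacityMilli int64, requests []int64) int64 {
	var used int64
	for _, r := range requests {
		used += r
	}
	return capacityMilli - used
}

func main() {
	// Assumed 11230m schedulable; kindnet requests 100m, kube-proxy and the
	// test pods request 0m (matching the per-pod lines in the log).
	fmt.Println(fillerCPU(11230, []int64{100, 0, 0, 0}))
}
```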
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:24:03.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Jul  1 12:24:04.035: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Jul  1 12:24:04.038: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Jul  1 12:24:04.038: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Jul  1 12:24:04.049: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Jul  1 12:24:04.049: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Jul  1 12:24:04.236: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Jul  1 12:24:04.236: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Jul  1 12:24:12.493: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:24:12.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-1760" for this suite.

• [SLOW TEST:9.184 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":189,"skipped":3176,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
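Editor's note: the "Verifying requests/limits" lines above check LimitRange defaulting: fields the pod leaves unset are filled from the LimitRange's defaults, while fields it sets are kept. A simplified sketch of that merge (the real admission plugin also derives requests from limits and enforces min/max, which this omits):

```go
package main

import "fmt"

// applyDefaults fills unset request/limit entries from LimitRange defaults;
// values the pod specifies win over the defaults.
func applyDefaults(reqs, lims, defReq, defLim map[string]string) (map[string]string, map[string]string) {
	outR, outL := map[string]string{}, map[string]string{}
	for k, v := range defReq {
		outR[k] = v
	}
	for k, v := range reqs {
		outR[k] = v
	}
	for k, v := range defLim {
		outL[k] = v
	}
	for k, v := range lims {
		outL[k] = v
	}
	return outR, outL
}

func main() {
	// Defaults matching the log: request cpu=100m/memory=200Mi, limit cpu=500m/memory=500Mi.
	defReq := map[string]string{"cpu": "100m", "memory": "200Mi"}
	defLim := map[string]string{"cpu": "500m", "memory": "500Mi"}
	// A pod specifying only cpu=300m keeps it; memory falls back to defaults.
	r, l := applyDefaults(map[string]string{"cpu": "300m"}, map[string]string{"cpu": "300m"}, defReq, defLim)
	fmt.Println(r["cpu"], r["memory"], l["cpu"], l["memory"])
}
```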
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:24:12.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-47a22a37-53ef-415e-9508-e66604ef92bb
STEP: Creating configMap with name cm-test-opt-upd-ab9a3ada-e15c-4623-98ca-5447cc0486f9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-47a22a37-53ef-415e-9508-e66604ef92bb
STEP: Updating configmap cm-test-opt-upd-ab9a3ada-e15c-4623-98ca-5447cc0486f9
STEP: Creating configMap with name cm-test-opt-create-0c12d63e-6c2c-4dca-bae7-081a81fe6937
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:25:48.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5866" for this suite.

• [SLOW TEST:96.252 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3183,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
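Editor's note: the cm-test-opt-* steps above (delete one configMap, update another, create a third, then watch the volume) rely on `optional: true` projected configMap sources, which let the pod start even when a referenced configMap is absent. A minimal sketch of such a pod follows; the names and image are illustrative, not the exact objects the test created.

```yaml
# Sketch of a pod with an optional projected configMap volume;
# object names and image are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create   # may not exist yet at pod creation
          optional: true             # so the pod is still admitted and started
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected      # kubelet syncs updates into this path
```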
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:25:48.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul  1 12:25:49.043: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  1 12:25:49.069: INFO: Waiting for terminating namespaces to be deleted...
Jul  1 12:25:49.072: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jul  1 12:25:49.095: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jul  1 12:25:49.095: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  1 12:25:49.095: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jul  1 12:25:49.095: INFO: 	Container kindnet-cni ready: true, restart count 7
Jul  1 12:25:49.095: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jul  1 12:25:49.100: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jul  1 12:25:49.100: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  1 12:25:49.100: INFO: pod-projected-configmaps-a2907a53-ec4a-404a-8fff-3c920c561cd4 from projected-5866 started at 2020-07-01 12:24:13 +0000 UTC (3 container statuses recorded)
Jul  1 12:25:49.100: INFO: 	Container createcm-volume-test ready: true, restart count 0
Jul  1 12:25:49.100: INFO: 	Container delcm-volume-test ready: true, restart count 0
Jul  1 12:25:49.100: INFO: 	Container updcm-volume-test ready: true, restart count 0
Jul  1 12:25:49.100: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jul  1 12:25:49.100: INFO: 	Container kindnet-cni ready: true, restart count 5
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c8eebeab-863d-4c15-9dbb-0397a582c310 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c8eebeab-863d-4c15-9dbb-0397a582c310 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c8eebeab-863d-4c15-9dbb-0397a582c310
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:25:59.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2981" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:10.402 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":191,"skipped":3226,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
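Editor's note: the test above labels a node (kubernetes.io/e2e-...=42) and then relaunches a pod whose `nodeSelector` requires that label. A hand-written equivalent of the relaunched pod might look like this; the label key, pod name, and image are illustrative stand-ins for the generated values in the log.

```yaml
# Pod constrained by nodeSelector, mirroring the test's second launch;
# label key/value, name, and image are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    kubernetes.io/e2e-example: "42"  # must already be set on some node,
                                     # or the pod stays Pending
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```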
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:25:59.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:26:00.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e306496b-df2c-4e4a-8278-adf8ce0c1d21" in namespace "projected-8640" to be "Succeeded or Failed"
Jul  1 12:26:00.399: INFO: Pod "downwardapi-volume-e306496b-df2c-4e4a-8278-adf8ce0c1d21": Phase="Pending", Reason="", readiness=false. Elapsed: 205.269792ms
Jul  1 12:26:02.403: INFO: Pod "downwardapi-volume-e306496b-df2c-4e4a-8278-adf8ce0c1d21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20963805s
Jul  1 12:26:04.446: INFO: Pod "downwardapi-volume-e306496b-df2c-4e4a-8278-adf8ce0c1d21": Phase="Running", Reason="", readiness=true. Elapsed: 4.252636705s
Jul  1 12:26:06.449: INFO: Pod "downwardapi-volume-e306496b-df2c-4e4a-8278-adf8ce0c1d21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.255484471s
STEP: Saw pod success
Jul  1 12:26:06.449: INFO: Pod "downwardapi-volume-e306496b-df2c-4e4a-8278-adf8ce0c1d21" satisfied condition "Succeeded or Failed"
Jul  1 12:26:06.451: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e306496b-df2c-4e4a-8278-adf8ce0c1d21 container client-container: 
STEP: delete the pod
Jul  1 12:26:06.576: INFO: Waiting for pod downwardapi-volume-e306496b-df2c-4e4a-8278-adf8ce0c1d21 to disappear
Jul  1 12:26:06.586: INFO: Pod downwardapi-volume-e306496b-df2c-4e4a-8278-adf8ce0c1d21 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:26:06.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8640" for this suite.

• [SLOW TEST:7.229 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3259,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
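Editor's note: the downward API volume plugin tested above exposes the container's own memory request as a file via a `resourceFieldRef`. A minimal sketch follows; pod name, image, and request size are illustrative, though the projected-downwardAPI shape is the mechanism the test exercises.

```yaml
# Sketch: expose requests.memory to the container through a projected
# downwardAPI volume; names and quantities are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory   # written into the file, in bytes
```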
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:26:06.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-959
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-959 to expose endpoints map[]
Jul  1 12:26:06.804: INFO: Get endpoints failed (3.739112ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul  1 12:26:07.807: INFO: successfully validated that service endpoint-test2 in namespace services-959 exposes endpoints map[] (1.007343681s elapsed)
STEP: Creating pod pod1 in namespace services-959
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-959 to expose endpoints map[pod1:[80]]
Jul  1 12:26:12.051: INFO: successfully validated that service endpoint-test2 in namespace services-959 exposes endpoints map[pod1:[80]] (4.237118065s elapsed)
STEP: Creating pod pod2 in namespace services-959
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-959 to expose endpoints map[pod1:[80] pod2:[80]]
Jul  1 12:26:15.225: INFO: successfully validated that service endpoint-test2 in namespace services-959 exposes endpoints map[pod1:[80] pod2:[80]] (3.169858613s elapsed)
STEP: Deleting pod pod1 in namespace services-959
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-959 to expose endpoints map[pod2:[80]]
Jul  1 12:26:15.327: INFO: successfully validated that service endpoint-test2 in namespace services-959 exposes endpoints map[pod2:[80]] (98.339973ms elapsed)
STEP: Deleting pod pod2 in namespace services-959
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-959 to expose endpoints map[]
Jul  1 12:26:16.357: INFO: successfully validated that service endpoint-test2 in namespace services-959 exposes endpoints map[] (1.009644821s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:26:16.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-959" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:9.841 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":193,"skipped":3260,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
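Editor's note: the endpoint-test2 sequence above works because endpoints track pods whose labels match the service selector; creating pod1/pod2 adds their IPs, deleting them removes them. A hand-written equivalent might look like this; the selector label and image are illustrative, while the service name and port match the log.

```yaml
# Service plus one matching pod, mirroring the endpoint-test2 flow;
# selector label and image are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-demo   # endpoints map stays empty until a pod matches
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: endpoint-demo   # matching label adds this pod to map[pod1:[80]]
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```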
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:26:16.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:26:16.552: INFO: Creating deployment "webserver-deployment"
Jul  1 12:26:16.614: INFO: Waiting for observed generation 1
Jul  1 12:26:18.649: INFO: Waiting for all required pods to come up
Jul  1 12:26:18.653: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul  1 12:26:28.662: INFO: Waiting for deployment "webserver-deployment" to complete
Jul  1 12:26:28.668: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jul  1 12:26:28.677: INFO: Updating deployment webserver-deployment
Jul  1 12:26:28.677: INFO: Waiting for observed generation 2
Jul  1 12:26:30.932: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul  1 12:26:30.935: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul  1 12:26:30.938: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul  1 12:26:30.947: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul  1 12:26:30.947: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul  1 12:26:30.949: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul  1 12:26:30.954: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jul  1 12:26:30.954: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jul  1 12:26:30.960: INFO: Updating deployment webserver-deployment
Jul  1 12:26:30.960: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jul  1 12:26:31.319: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul  1 12:26:31.608: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
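Editor's note: the 20/13 split verified above is proportional scaling at work. With `maxSurge: 3`, scaling from 10 to 30 replicas allows at most 33 pods, and the deployment controller distributes the extra capacity across the two replicasets in proportion to their current sizes (8 and 5), yielding 20 and 13. The deployment under test, reconstructed from values in this log (the non-existent image is the log's own `webserver:404` update), would look roughly like:

```yaml
# Reconstructed from values in the surrounding log; a sketch,
# not the exact manifest the e2e framework generated.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-deployment
spec:
  replicas: 30
  selector:
    matchLabels:
      name: httpd
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3        # 30 + 3 = 33 total pods allowed: split 20/13 above
      maxUnavailable: 2
  template:
    metadata:
      labels:
        name: httpd
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```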
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul  1 12:26:31.949: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-8494 /apis/apps/v1/namespaces/deployment-8494/deployments/webserver-deployment 8c68cf56-025a-4dd3-9c3a-94675dc88287 16804925 3 2020-07-01 12:26:16 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-07-01 12:26:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 
125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 
123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002839888  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-07-01 12:26:29 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-01 12:26:31 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jul  1 12:26:31.989: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-8494 /apis/apps/v1/namespaces/deployment-8494/replicasets/webserver-deployment-6676bcd6d4 13e10148-60ff-4237-93b3-42d6b038959d 16804947 3 2020-07-01 12:26:28 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 8c68cf56-025a-4dd3-9c3a-94675dc88287 0xc003551da7 0xc003551da8}] []  [{kube-controller-manager Update apps/v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 99 54 56 99 102 53 54 45 48 50 53 97 45 52 100 100 51 45 57 99 51 97 45 57 52 54 55 53 100 99 56 56 50 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 
102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003551e28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  1 12:26:31.989: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jul  1 12:26:31.989: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-8494 /apis/apps/v1/namespaces/deployment-8494/replicasets/webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 16804937 3 2020-07-01 12:26:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 8c68cf56-025a-4dd3-9c3a-94675dc88287 0xc003551e87 0xc003551e88}] []  [{kube-controller-manager Update apps/v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c68cf56-025a-4dd3-9c3a-94675dc88287\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}},}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003551ef8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jul  1 12:26:32.095: INFO: Pod "webserver-deployment-6676bcd6d4-4v4j7" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4v4j7 webserver-deployment-6676bcd6d4- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-4v4j7 3dacb43d-c5a8-45a8-a8fd-00e809df9b9f 16804857 0 2020-07-01 12:26:28 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13e10148-60ff-4237-93b3-42d6b038959d 0xc003598677 0xc003598678}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:28 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e10148-60ff-4237-93b3-42d6b038959d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-07-01 12:26:29 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-07-01 12:26:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.096: INFO: Pod "webserver-deployment-6676bcd6d4-598zb" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-598zb webserver-deployment-6676bcd6d4- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-598zb 4b832fbd-2edf-41e9-afb8-81836314505e 16804908 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13e10148-60ff-4237-93b3-42d6b038959d 0xc003598827 0xc003598828}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e10148-60ff-4237-93b3-42d6b038959d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.096: INFO: Pod "webserver-deployment-6676bcd6d4-9jxcb" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9jxcb webserver-deployment-6676bcd6d4- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-9jxcb 6ffdc4ff-7c03-4645-add0-8d674e438a8b 16804906 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13e10148-60ff-4237-93b3-42d6b038959d 0xc003598967 0xc003598968}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e10148-60ff-4237-93b3-42d6b038959d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.096: INFO: Pod "webserver-deployment-6676bcd6d4-bzgmf" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bzgmf webserver-deployment-6676bcd6d4- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-bzgmf 5e08199d-91dd-4fdb-a4f5-f7f94ce5bba6 16804863 0 2020-07-01 12:26:28 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13e10148-60ff-4237-93b3-42d6b038959d 0xc003598aa7 0xc003598aa8}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:28 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e10148-60ff-4237-93b3-42d6b038959d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-07-01 12:26:29 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-07-01 12:26:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.096: INFO: Pod "webserver-deployment-6676bcd6d4-ccsdw" is not available:
&Pod{ObjectMeta:{Name:webserver-deployment-6676bcd6d4-ccsdw,GenerateName:webserver-deployment-6676bcd6d4-,Namespace:deployment-8494,SelfLink:/api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-ccsdw,UID:d6e91295-02f7-40d8-b8c1-fa59df6414d4,ResourceVersion:16804839,CreationTimestamp:2020-07-01 12:26:28 +0000 UTC,Labels:map[name:httpd pod-template-hash:6676bcd6d4],OwnerReferences:[apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 (uid 13e10148-60ff-4237-93b3-42d6b038959d)],ManagedFields:[kube-controller-manager Update v1 2020-07-01 12:26:28 +0000 UTC (metadata, spec), kubelet Update v1 2020-07-01 12:26:28 +0000 UTC (status)],},Spec:PodSpec{Volumes:[default-token-h2phh (Secret default-token-h2phh, DefaultMode 420)],Containers:[httpd (Image:webserver:404, ImagePullPolicy:IfNotPresent, VolumeMount default-token-h2phh at /var/run/secrets/kubernetes.io/serviceaccount read-only)],RestartPolicy:Always,TerminationGracePeriodSeconds:0,DNSPolicy:ClusterFirst,ServiceAccountName:default,NodeName:kali-worker,SchedulerName:default-scheduler,Tolerations:[node.kubernetes.io/not-ready:NoExecute for 300s, node.kubernetes.io/unreachable:NoExecute for 300s],EnableServiceLinks:true,},Status:PodStatus{Phase:Pending,Conditions:[Initialized=True (2020-07-01 12:26:28 +0000 UTC), Ready=False (ContainersNotReady: containers with unready status: [httpd]), ContainersReady=False (ContainersNotReady: containers with unready status: [httpd]), PodScheduled=True (2020-07-01 12:26:28 +0000 UTC)],HostIP:172.17.0.15,StartTime:2020-07-01 12:26:28 +0000 UTC,ContainerStatuses:[httpd Waiting (ContainerCreating), Ready:false, RestartCount:0, Image:webserver:404],QOSClass:BestEffort,},}
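The `FieldsV1` `Raw` fields in these Pod dumps are Go byte slices printed as decimal values; each run of numbers is the UTF-8 encoding of a managed-fields JSON document. A minimal sketch of recovering the JSON (Python, assuming the decimal values from the log are pasted into a list; only the first few bytes of the kube-controller-manager entry are shown):

```python
# Decode a FieldsV1 Raw dump (decimal byte values) back into readable JSON text.
# These are the first 15 byte values of the kube-controller-manager update above.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123]

text = bytes(raw).decode("utf-8")
print(text)  # → {"f:metadata":{
```

Pasting the full byte run yields the complete managed-fields document (e.g. `{"f:metadata":{"f:generateName":{},...}`), which records which manager owns which fields of the object.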
Jul  1 12:26:32.097: INFO: Pod "webserver-deployment-6676bcd6d4-cdb96" is not available:
&Pod{ObjectMeta:{Name:webserver-deployment-6676bcd6d4-cdb96,GenerateName:webserver-deployment-6676bcd6d4-,Namespace:deployment-8494,SelfLink:/api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-cdb96,UID:b747835d-9924-4984-adfa-6f161f464882,ResourceVersion:16804913,CreationTimestamp:2020-07-01 12:26:31 +0000 UTC,Labels:map[name:httpd pod-template-hash:6676bcd6d4],OwnerReferences:[apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 (uid 13e10148-60ff-4237-93b3-42d6b038959d)],ManagedFields:[kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC (metadata, spec)],},Spec:PodSpec{Volumes:[default-token-h2phh (Secret default-token-h2phh, DefaultMode 420)],Containers:[httpd (Image:webserver:404, ImagePullPolicy:IfNotPresent, VolumeMount default-token-h2phh at /var/run/secrets/kubernetes.io/serviceaccount read-only)],RestartPolicy:Always,TerminationGracePeriodSeconds:0,DNSPolicy:ClusterFirst,ServiceAccountName:default,NodeName:kali-worker2,SchedulerName:default-scheduler,Tolerations:[node.kubernetes.io/not-ready:NoExecute for 300s, node.kubernetes.io/unreachable:NoExecute for 300s],EnableServiceLinks:true,},Status:PodStatus{Phase:Pending,Conditions:[PodScheduled=True (2020-07-01 12:26:31 +0000 UTC)],HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,},}
Jul  1 12:26:32.097: INFO: Pod "webserver-deployment-6676bcd6d4-d2z7m" is not available:
&Pod{ObjectMeta:{Name:webserver-deployment-6676bcd6d4-d2z7m,GenerateName:webserver-deployment-6676bcd6d4-,Namespace:deployment-8494,SelfLink:/api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-d2z7m,UID:fd2427ea-ba2e-4214-bfcf-2d74ae40bfa6,ResourceVersion:16804859,CreationTimestamp:2020-07-01 12:26:28 +0000 UTC,Labels:map[name:httpd pod-template-hash:6676bcd6d4],OwnerReferences:[apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 (uid 13e10148-60ff-4237-93b3-42d6b038959d)],ManagedFields:[kube-controller-manager Update v1 2020-07-01 12:26:28 +0000 UTC (metadata, spec), kubelet Update v1 2020-07-01 12:26:29 +0000 UTC (status)],},Spec:PodSpec{Volumes:[default-token-h2phh (Secret default-token-h2phh, DefaultMode 420)],Containers:[httpd (Image:webserver:404, ImagePullPolicy:IfNotPresent, VolumeMount default-token-h2phh at /var/run/secrets/kubernetes.io/serviceaccount read-only)],RestartPolicy:Always,TerminationGracePeriodSeconds:0,DNSPolicy:ClusterFirst,ServiceAccountName:default,NodeName:kali-worker,SchedulerName:default-scheduler,Tolerations:[node.kubernetes.io/not-ready:NoExecute for 300s, node.kubernetes.io/unreachable:NoExecute for 300s],EnableServiceLinks:true,},Status:PodStatus{Phase:Pending,Conditions:[Initialized=True (2020-07-01 12:26:29 +0000 UTC), Ready=False (ContainersNotReady: containers with unready status: [httpd]), ContainersReady=False (ContainersNotReady: containers with unready status: [httpd]), PodScheduled=True (2020-07-01 12:26:28 +0000 UTC)],HostIP:172.17.0.15,StartTime:2020-07-01 12:26:29 +0000 UTC,ContainerStatuses:[httpd Waiting (ContainerCreating), Ready:false, RestartCount:0, Image:webserver:404],QOSClass:BestEffort,},}
Jul  1 12:26:32.097: INFO: Pod "webserver-deployment-6676bcd6d4-fh6b8" is not available:
&Pod{ObjectMeta:{Name:webserver-deployment-6676bcd6d4-fh6b8,GenerateName:webserver-deployment-6676bcd6d4-,Namespace:deployment-8494,SelfLink:/api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-fh6b8,UID:b4111c50-308b-4aa5-914e-4bbc54c70ed2,ResourceVersion:16804914,CreationTimestamp:2020-07-01 12:26:31 +0000 UTC,Labels:map[name:httpd pod-template-hash:6676bcd6d4],OwnerReferences:[apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 (uid 13e10148-60ff-4237-93b3-42d6b038959d)],ManagedFields:[kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC (metadata, spec)],},Spec:PodSpec{Volumes:[default-token-h2phh (Secret default-token-h2phh, DefaultMode 420)],Containers:[httpd (Image:webserver:404, ImagePullPolicy:IfNotPresent, VolumeMount default-token-h2phh at /var/run/secrets/kubernetes.io/serviceaccount read-only)],RestartPolicy:Always,TerminationGracePeriodSeconds:0,DNSPolicy:ClusterFirst,ServiceAccountName:default,NodeName:kali-worker,SchedulerName:default-scheduler,Tolerations:[node.kubernetes.io/not-ready:NoExecute for 300s, node.kubernetes.io/unreachable:NoExecute for 300s],EnableServiceLinks:true,},Status:PodStatus{Phase:Pending,Conditions:[PodScheduled=True (2020-07-01 12:26:31 +0000 UTC)],HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,},}
Jul  1 12:26:32.098: INFO: Pod "webserver-deployment-6676bcd6d4-hwk2g" is not available:
&Pod{ObjectMeta:{Name:webserver-deployment-6676bcd6d4-hwk2g,GenerateName:webserver-deployment-6676bcd6d4-,Namespace:deployment-8494,SelfLink:/api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-hwk2g,UID:fa7d8e9a-c27d-4990-bd00-cc88412a9ee6,ResourceVersion:16804934,CreationTimestamp:2020-07-01 12:26:31 +0000 UTC,Labels:map[name:httpd pod-template-hash:6676bcd6d4],OwnerReferences:[apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 (uid 13e10148-60ff-4237-93b3-42d6b038959d)],ManagedFields:[kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC (metadata, spec)],},Spec:PodSpec{Volumes:[default-token-h2phh (Secret default-token-h2phh, DefaultMode 420)],Containers:[httpd (Image:webserver:404, ImagePullPolicy:IfNotPresent, VolumeMount default-token-h2phh at /var/run/secrets/kubernetes.io/serviceaccount read-only)],RestartPolicy:Always,TerminationGracePeriodSeconds:0,DNSPolicy:ClusterFirst,ServiceAccountName:default,NodeName:kali-worker2,SchedulerName:default-scheduler,Tolerations:[node.kubernetes.io/not-ready:NoExecute for 300s, node.kubernetes.io/unreachable:NoExecute for 300s],EnableServiceLinks:true,},Status:PodStatus{Phase:Pending,Conditions:[PodScheduled=True (2020-07-01 12:26:31 +0000 UTC)],HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,},}
Jul  1 12:26:32.098: INFO: Pod "webserver-deployment-6676bcd6d4-jrm6z" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jrm6z webserver-deployment-6676bcd6d4- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-jrm6z dd8483f9-6ba8-4a3c-a258-ded3f6bae21d 16804845 0 2020-07-01 12:26:28 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13e10148-60ff-4237-93b3-42d6b038959d 0xc0035993a7 0xc0035993a8}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 51 101 49 48 49 52 56 45 54 48 102 102 45 52 50 51 55 45 57 51 98 51 45 52 50 100 54 98 48 51 56 57 53 57 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-01 12:26:28 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-07-01 12:26:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
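The `FieldsV1{Raw:*[...]}` arrays in the dumps above are the managed-fields (server-side apply) JSON printed as decimal UTF-8 byte values. A minimal sketch of decoding such a dump back into readable JSON (the sample bytes are a short, illustrative fragment in the same encoding, not a full array copied from the log):

```python
import json

# Managed-fields "Raw" values are printed as space-separated decimal bytes.
# Joining them back as UTF-8 recovers the JSON fieldset the apiserver stored.
raw = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"

decoded = bytes(int(b) for b in raw.split()).decode("utf-8")
print(decoded)              # {"f:metadata":{}}
print(json.loads(decoded))  # parses as ordinary JSON
```

Running this against any of the full byte arrays above yields the `f:metadata`/`f:spec` field ownership map that kube-controller-manager and kubelet each claim for the pod.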
Jul  1 12:26:32.098: INFO: Pod "webserver-deployment-6676bcd6d4-n76vz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-n76vz webserver-deployment-6676bcd6d4- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-n76vz ad1bb7b9-1a08-4f89-b8a5-e1365547291d 16804921 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13e10148-60ff-4237-93b3-42d6b038959d 0xc003599577 0xc003599578}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 51 101 49 48 49 52 56 45 54 48 102 102 45 52 50 51 55 45 57 51 98 51 45 52 50 100 54 98 48 51 56 57 53 57 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.099: INFO: Pod "webserver-deployment-6676bcd6d4-tcx7l" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tcx7l webserver-deployment-6676bcd6d4- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-tcx7l 516146a7-5d61-4bc1-9194-aa3ee255ff87 16804928 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13e10148-60ff-4237-93b3-42d6b038959d 0xc0035996c7 0xc0035996c8}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 51 101 49 48 49 52 56 45 54 48 102 102 45 52 50 51 55 45 57 51 98 51 45 52 50 100 54 98 48 51 56 57 53 57 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.099: INFO: Pod "webserver-deployment-6676bcd6d4-wxln5" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wxln5 webserver-deployment-6676bcd6d4- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-6676bcd6d4-wxln5 b946c607-b3bf-4e5d-875c-1c7c7714b6a9 16804888 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13e10148-60ff-4237-93b3-42d6b038959d 0xc003599837 0xc003599838}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 51 101 49 48 49 52 56 45 54 48 102 102 45 52 50 51 55 45 57 51 98 51 45 52 50 100 54 98 48 51 56 57 53 57 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.099: INFO: Pod "webserver-deployment-84855cf797-26svn" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-26svn webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-26svn cd2be90b-bce9-4256-8596-f251a873d6fc 16804910 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc003599977 0xc003599978}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 98 56 49 49 48 97 56 45 56 100 56 57 45 52 99 102 102 45 98 52 99 98 45 56 51 48 100 56 98 56 101 101 50 102 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 
121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilitie
s:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
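The "is not available" lines above follow from the `PodCondition` entries in each dump: the pending pods carry only `PodScheduled=True`, never `Ready=True`. A minimal sketch of that availability check, assuming the simplest form (Ready condition true; the real framework additionally honors `minReadySeconds`), with condition dicts mirroring the field names in the dumps:

```python
# Illustrative check only, not the e2e framework's actual helper: a pod is
# treated as available once its Ready condition reports status "True".
def is_available(conditions):
    return any(c["type"] == "Ready" and c["status"] == "True"
               for c in conditions)

# Shape taken from the dumps above: the new-ReplicaSet pods are scheduled
# but their httpd container (image webserver:404) never becomes ready.
pending = [{"type": "PodScheduled", "status": "True"}]
ready = [{"type": "PodScheduled", "status": "True"},
         {"type": "Ready", "status": "True"}]

print(is_available(pending))  # False -> logged as "is not available"
print(is_available(ready))    # True
```

This matches the log: pods from ReplicaSet `webserver-deployment-6676bcd6d4` use the intentionally unpullable image `webserver:404`, so they stay `Pending` with `ContainersNotReady` and are reported as not available, while the `84855cf797` pods use a real `httpd:2.4.38-alpine` image and are merely still creating.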
Jul  1 12:26:32.099: INFO: Pod "webserver-deployment-84855cf797-2thvp" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-2thvp webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-2thvp 2554bdf6-265f-490f-98b4-a745f02e6b12 16804948 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc003599aa7 0xc003599aa8}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:
false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-07-01 12:26:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
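(Editor's note on reading the dumps above: each managedFields entry prints its FieldsV1 `Raw` value as a Go byte slice, i.e. space-separated decimal UTF-8 code points of a small JSON document describing field ownership. A minimal sketch of decoding such a dump — Python assumed; `decode_fieldsv1` is a hypothetical helper, not part of the e2e framework:)

```python
import json

def decode_fieldsv1(raw: str) -> dict:
    """Decode a space-separated decimal byte dump (the numbers logged
    after 'FieldsV1{Raw:*[') into the managed-fields JSON it encodes."""
    text = bytes(int(b) for b in raw.split()).decode("utf-8")
    return json.loads(text)

# Tiny example: the bytes for '{"f:metadata":{}}'
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
assert decode_fieldsv1(sample) == {"f:metadata": {}}
```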
Jul  1 12:26:32.100: INFO: Pod "webserver-deployment-84855cf797-659zs" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-659zs webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-659zs 07af7394-3a62-42dd-984d-938ede3f8887 16804806 0 2020-07-01 12:26:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc003599c37 0xc003599c38}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-07-01 12:26:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.212\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRoot
Filesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:27 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.212,StartTime:2020-07-01 12:26:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 12:26:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://16263fa633ee7bc5682d14a751689dca3e8c68406e89494190f5a07d93018788,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.212,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
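(Editor's note: the "is available" / "is not available" verdicts in these lines track the pods' Ready condition — 659zs reports Ready=True, while the freshly scheduled pods only report PodScheduled=True. A simplified sketch of that check; assumption: the real e2e availability logic also waits out minReadySeconds, and `is_pod_available` here is a hypothetical helper, not the framework's function:)

```python
def is_pod_available(pod_status: dict) -> bool:
    """Simplified availability check: the pod's Ready condition is True.
    (Simplification: the real check also honors minReadySeconds.)"""
    return any(
        c.get("type") == "Ready" and c.get("status") == "True"
        for c in pod_status.get("conditions", [])
    )

# Mirrors the two cases logged above.
available = {"phase": "Running",
             "conditions": [{"type": "Ready", "status": "True"}]}
pending = {"phase": "Pending",
           "conditions": [{"type": "PodScheduled", "status": "True"}]}
assert is_pod_available(available)
assert not is_pod_available(pending)
```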
Jul  1 12:26:32.100: INFO: Pod "webserver-deployment-84855cf797-97nmj" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-97nmj webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-97nmj e11d9afb-9916-4d3c-b87e-c40e86f0fb45 16804912 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc003599de7 0xc003599de8}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilitie
s:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.100: INFO: Pod "webserver-deployment-84855cf797-bv72x" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-bv72x webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-bv72x 443fb58b-a795-434d-a239-1696b7eeed86 16804909 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc003599f17 0xc003599f18}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilitie
s:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.101: INFO: Pod "webserver-deployment-84855cf797-bxq5d" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-bxq5d webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-bxq5d b9bbc132-6f7c-4131-95a7-32f18dfa5deb 16804800 0 2020-07-01 12:26:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bc047 0xc0027bc048}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-07-01 12:26:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.218\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRoot
Filesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:27 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.218,StartTime:2020-07-01 12:26:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 12:26:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://19b43dc7eacd0b785db2b216411af407d42da4bbdf1e80e98615dd93f9694dbd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.101: INFO: Pod "webserver-deployment-84855cf797-db55r" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-db55r webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-db55r 04b52b61-3286-424e-93de-d94f3a4bdeea 16804773 0 2020-07-01 12:26:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bc1f7 0xc0027bc1f8}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-07-01 12:26:26 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.209\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRoot
Filesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.209,StartTime:2020-07-01 12:26:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 12:26:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://03ff7d5e929fee6d80ba7972a93c0f1c566adb93871cbbbe23fcd1e001a524a1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.101: INFO: Pod "webserver-deployment-84855cf797-dh5nr" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-dh5nr webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-dh5nr 44523cd8-1c2c-4269-8929-bbabf1c387d2 16804738 0 2020-07-01 12:26:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bc3a7 0xc0027bc3a8}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-07-01 12:26:22 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.208\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRoot
Filesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:22 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.208,StartTime:2020-07-01 12:26:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 12:26:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ccfc0f29a07dcd45cee3f078698e535fe60592ec197038185c9dedca52b51265,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.208,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.102: INFO: Pod "webserver-deployment-84855cf797-dxxg7" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-dxxg7 webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-dxxg7 0bec7540-6ea4-414e-9c54-0745262a748f 16804790 0 2020-07-01 12:26:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bc557 0xc0027bc558}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-07-01 12:26:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.219\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRoot
Filesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:27 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.219,StartTime:2020-07-01 12:26:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 12:26:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c950d5b773d02eb3206a63a1245d712a843cba27fc95e4b0a5ae4c028c13ed60,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.219,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
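The `FieldsV1{Raw:*[123 34 ...]}` values in the pod dumps above are the managed-fields entries printed as arrays of decimal byte values. They are ordinary UTF-8 JSON; a short, self-contained sketch (the helper name is illustrative, not part of the e2e framework) can decode them back into readable form:

```python
import json

def decode_fieldsv1_raw(byte_values):
    """Decode a FieldsV1 Raw array (decimal byte values, as printed in the
    log above) into the managed-fields JSON object it encodes."""
    text = bytes(byte_values).decode("utf-8")
    return json.loads(text)

# The first bytes of the kube-controller-manager entries above decode to
# '{"f:metadata":{"f:generateName":{}...'; this is a truncated sample
# closed off so it parses as valid JSON.
sample = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123,
          34, 102, 58, 103, 101, 110, 101, 114, 97, 116, 101, 78, 97, 109, 101,
          34, 58, 123, 125, 125, 125]
print(decode_fieldsv1_raw(sample))  # {'f:metadata': {'f:generateName': {}}}
```

Applied to the full arrays, this recovers which fields kube-controller-manager set (metadata, spec) versus which the kubelet set (status conditions, podIPs, startTime).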
Jul  1 12:26:32.102: INFO: Pod "webserver-deployment-84855cf797-gtczf" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-gtczf webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-gtczf 4720b530-8521-4ed8-801b-cec5db2c4382 16804933 0 2020-07-01 12:26:30 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bc707 0xc0027bc708}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 98 56 49 49 48 97 56 45 56 100 56 57 45 52 99 102 102 45 98 52 99 98 45 56 51 48 100 56 98 56 101 101 50 102 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 
121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:
false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-07-01 12:26:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
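The dump above shows why `webserver-deployment-84855cf797-gtczf` is logged as "not available": its phase is still `Pending` and its `Ready` condition is `False` with reason `ContainersNotReady`. A minimal sketch of that rule follows; the function name and dict shape are illustrative (the real deployment availability check also factors in `minReadySeconds`), but the field values mirror the `PodStatus` printed above:

```python
def is_pod_available(status):
    """Rough availability rule: the pod is Running and its Ready
    condition is True. `status` mirrors the PodStatus dumps above:
    {'phase': str, 'conditions': [{'type': str, 'status': str}, ...]}."""
    if status["phase"] != "Running":
        return False
    return any(c["type"] == "Ready" and c["status"] == "True"
               for c in status["conditions"])

# webserver-deployment-84855cf797-gtczf from the dump above:
# Pending, Ready=False (ContainersNotReady) -> not available.
gtczf = {"phase": "Pending",
         "conditions": [{"type": "Initialized", "status": "True"},
                        {"type": "Ready", "status": "False"},
                        {"type": "ContainersReady", "status": "False"},
                        {"type": "PodScheduled", "status": "True"}]}
print(is_pod_available(gtczf))  # False
```

By the same rule, the earlier `Running` pod with `Ready=True` (e.g. the one on 10.244.1.219) is reported "available".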
Jul  1 12:26:32.102: INFO: Pod "webserver-deployment-84855cf797-hhbt5" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-hhbt5 webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-hhbt5 8f42bd04-9454-490e-bca1-46042e28fc7d 16804911 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bc897 0xc0027bc898}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 98 56 49 49 48 97 56 45 56 100 56 57 45 52 99 102 102 45 98 52 99 98 45 56 51 48 100 56 98 56 101 101 50 102 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 
121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilitie
s:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.103: INFO: Pod "webserver-deployment-84855cf797-jhdvl" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-jhdvl webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-jhdvl 0e375296-2b5e-42b3-aef1-47bc5d046e79 16804926 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bc9c7 0xc0027bc9c8}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 98 56 49 49 48 97 56 45 56 100 56 57 45 52 99 102 102 45 98 52 99 98 45 56 51 48 100 56 98 56 101 101 50 102 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 
121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilitie
s:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.103: INFO: Pod "webserver-deployment-84855cf797-jlvf2" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-jlvf2 webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-jlvf2 561c0772-cf7f-464b-bfde-6f6a42e68965 16804922 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bcaf7 0xc0027bcaf8}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 98 56 49 49 48 97 56 45 56 100 56 57 45 52 99 102 102 45 98 52 99 98 45 56 51 48 100 56 98 56 101 101 50 102 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 
121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilitie
s:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
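When skimming a long run like this, it helps to tally the per-pod `INFO: Pod "..." is available` / `is not available` lines rather than reading each dump. A small sketch (the regex and helper are ad hoc, written against the exact line format above):

```python
import re

# Matches the e2e INFO lines above, e.g.:
#   Jul  1 12:26:32.102: INFO: Pod "webserver-deployment-...-gtczf" is not available:
LINE_RE = re.compile(r'Pod "(?P<name>[^"]+)" is (?P<state>not available|available)')

def tally_availability(log_lines):
    """Count pods the framework reported as available vs not available."""
    counts = {"available": 0, "not available": 0}
    for line in log_lines:
        m = LINE_RE.search(line)
        if m:
            counts[m.group("state")] += 1
    return counts

log = [
    'Jul  1 12:26:32.102: INFO: Pod "webserver-deployment-84855cf797-gtczf" is not available:',
    'Jul  1 12:26:32.103: INFO: Pod "webserver-deployment-84855cf797-kmlbh" is available:',
]
print(tally_availability(log))  # {'available': 1, 'not available': 1}
```

The alternation is ordered `not available|available` so the negative form is tried first; with the order reversed, `available` would match inside "not available" and every pod would be counted as available.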
Jul  1 12:26:32.103: INFO: Pod "webserver-deployment-84855cf797-kmlbh" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-kmlbh webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-kmlbh 26fac90c-7957-40de-aced-6cccd8e9964d 16804746 0 2020-07-01 12:26:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bcc67 0xc0027bcc68}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-07-01 12:26:23 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.216\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRoot
Filesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.216,StartTime:2020-07-01 12:26:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 12:26:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://43ce285670a1fc0d7c8f8711b5f97e67bcc4464e4ccd63afb8ad93094e73fcc4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.216,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.103: INFO: Pod "webserver-deployment-84855cf797-kpf9l" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-kpf9l webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-kpf9l b55edf42-bdfd-4b1f-b3a5-70a0ec45a2ef 16804930 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bd057 0xc0027bd058}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilitie
s:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.104: INFO: Pod "webserver-deployment-84855cf797-l96jv" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-l96jv webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-l96jv eda542eb-f059-421f-8eb9-41153e9cb2a0 16804891 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bd317 0xc0027bd318}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilitie
s:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.104: INFO: Pod "webserver-deployment-84855cf797-mtwb9" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-mtwb9 webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-mtwb9 e8f7c6b3-baad-4c65-9314-ae265b20f912 16804795 0 2020-07-01 12:26:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bd637 0xc0027bd638}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-07-01 12:26:27 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.217\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRoot
Filesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:27 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.217,StartTime:2020-07-01 12:26:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 12:26:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://924ed13c320aefa6b142145f7beff6be1f432748ab56b0c623fb3ac7135c1143,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.104: INFO: Pod "webserver-deployment-84855cf797-mv5ns" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-mv5ns webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-mv5ns ccc213ca-a4fb-40e8-b85a-132af07e0d6a 16804753 0 2020-07-01 12:26:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bd9a7 0xc0027bd9a8}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-07-01 12:26:24 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.215\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRoot
Filesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.215,StartTime:2020-07-01 12:26:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 12:26:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://150aa7eb937bf1f91d15d322bbabdeff0dded1fe2d591707a514607189a16d98,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.104: INFO: Pod "webserver-deployment-84855cf797-qw4lz" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qw4lz webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-qw4lz 45aa252c-e66c-4e05-968d-03c463e4d0f7 16804929 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc0027bdd87 0xc0027bdd88}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 98 56 49 49 48 97 56 45 56 100 56 57 45 52 99 102 102 45 98 52 99 98 45 56 51 48 100 56 98 56 101 101 50 102 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 
121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilitie
s:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:26:32.105: INFO: Pod "webserver-deployment-84855cf797-x2lnh" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-x2lnh webserver-deployment-84855cf797- deployment-8494 /api/v1/namespaces/deployment-8494/pods/webserver-deployment-84855cf797-x2lnh 1fff01a7-ebac-4b3c-aa04-2faf5d51713f 16804920 0 2020-07-01 12:26:31 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3b8110a8-8d89-4cff-b4cb-830d8b8ee2fe 0xc00436c027 0xc00436c028}] []  [{kube-controller-manager Update v1 2020-07-01 12:26:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 98 56 49 49 48 97 56 45 56 100 56 57 45 52 99 102 102 45 98 52 99 98 45 56 51 48 100 56 98 56 101 101 50 102 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 
121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2phh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2phh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2phh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilitie
s:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
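The `FieldsV1{Raw:*[123 34 102 ...]}` blocks in the pod dumps above are the managed-fields JSON printed as decimal byte values (`123` = `{`, `34` = `"`, and so on). A minimal Go sketch for turning such a dump back into readable JSON (the helper name is illustrative, not part of the test framework):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeFieldsV1 converts a space-separated list of decimal byte values,
// as printed in the FieldsV1 Raw dumps above, back into the JSON string
// they encode.
func decodeFieldsV1(raw string) (string, error) {
	fields := strings.Fields(raw)
	buf := make([]byte, 0, len(fields))
	for _, f := range fields {
		n, err := strconv.Atoi(f)
		if err != nil {
			return "", err
		}
		buf = append(buf, byte(n))
	}
	return string(buf), nil
}

func main() {
	// The first few byte values from the Raw dumps above.
	s, err := decodeFieldsV1("123 34 102 58 109 101 116 97 100 97 116 97 34")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // {"f:metadata"
}
```

Decoding a full dump yields the server-side-apply managed-fields map (`{"f:metadata":{"f:generateName":{},...}}`) recorded for the `kube-controller-manager` and `kubelet` field managers.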
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:26:32.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8494" for this suite.

• [SLOW TEST:15.835 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":194,"skipped":3279,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:26:32.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul  1 12:26:32.735: INFO: Waiting up to 5m0s for pod "downward-api-6f581b10-a857-4a18-a962-c6a2021dba09" in namespace "downward-api-847" to be "Succeeded or Failed"
Jul  1 12:26:32.871: INFO: Pod "downward-api-6f581b10-a857-4a18-a962-c6a2021dba09": Phase="Pending", Reason="", readiness=false. Elapsed: 136.097789ms
Jul  1 12:26:34.923: INFO: Pod "downward-api-6f581b10-a857-4a18-a962-c6a2021dba09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187799145s
Jul  1 12:26:37.088: INFO: Pod "downward-api-6f581b10-a857-4a18-a962-c6a2021dba09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352454828s
Jul  1 12:26:39.711: INFO: Pod "downward-api-6f581b10-a857-4a18-a962-c6a2021dba09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.976430817s
Jul  1 12:26:42.124: INFO: Pod "downward-api-6f581b10-a857-4a18-a962-c6a2021dba09": Phase="Pending", Reason="", readiness=false. Elapsed: 9.389101881s
Jul  1 12:26:44.557: INFO: Pod "downward-api-6f581b10-a857-4a18-a962-c6a2021dba09": Phase="Pending", Reason="", readiness=false. Elapsed: 11.821834901s
Jul  1 12:26:47.365: INFO: Pod "downward-api-6f581b10-a857-4a18-a962-c6a2021dba09": Phase="Pending", Reason="", readiness=false. Elapsed: 14.629820598s
Jul  1 12:26:49.369: INFO: Pod "downward-api-6f581b10-a857-4a18-a962-c6a2021dba09": Phase="Running", Reason="", readiness=true. Elapsed: 16.634362199s
Jul  1 12:26:51.486: INFO: Pod "downward-api-6f581b10-a857-4a18-a962-c6a2021dba09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.750988868s
STEP: Saw pod success
Jul  1 12:26:51.486: INFO: Pod "downward-api-6f581b10-a857-4a18-a962-c6a2021dba09" satisfied condition "Succeeded or Failed"
Jul  1 12:26:51.562: INFO: Trying to get logs from node kali-worker pod downward-api-6f581b10-a857-4a18-a962-c6a2021dba09 container dapi-container: 
STEP: delete the pod
Jul  1 12:26:51.718: INFO: Waiting for pod downward-api-6f581b10-a857-4a18-a962-c6a2021dba09 to disappear
Jul  1 12:26:51.734: INFO: Pod downward-api-6f581b10-a857-4a18-a962-c6a2021dba09 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:26:51.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-847" for this suite.

• [SLOW TEST:20.450 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3279,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:26:52.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-e81c2160-178b-4756-8ba4-12e73e0411ea in namespace container-probe-5290
Jul  1 12:26:59.765: INFO: Started pod busybox-e81c2160-178b-4756-8ba4-12e73e0411ea in namespace container-probe-5290
STEP: checking the pod's current state and verifying that restartCount is present
Jul  1 12:26:59.768: INFO: Initial restart count of pod busybox-e81c2160-178b-4756-8ba4-12e73e0411ea is 0
Jul  1 12:27:54.191: INFO: Restart count of pod container-probe-5290/busybox-e81c2160-178b-4756-8ba4-12e73e0411ea is now 1 (54.422767778s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:27:54.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5290" for this suite.
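The container-probe test above observes a restart driven by an exec liveness probe running `cat /tmp/health`. A pod manifest of that general shape (a sketch based on the standard Kubernetes probe API; the name, image args, and timings are illustrative, not the test's exact spec) looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec          # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the health file, then remove it so the probe starts failing.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

Once `/tmp/health` is removed, consecutive probe failures cause the kubelet to restart the container, which is the restart-count increment (0 to 1 after ~54s) the test asserts on.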

• [SLOW TEST:61.539 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3288,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:27:54.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:27:54.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3676'
Jul  1 12:27:54.587: INFO: stderr: ""
Jul  1 12:27:54.587: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jul  1 12:27:54.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3676'
Jul  1 12:27:56.293: INFO: stderr: ""
Jul  1 12:27:56.293: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul  1 12:27:57.646: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 12:27:57.646: INFO: Found 0 / 1
Jul  1 12:27:58.957: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 12:27:58.957: INFO: Found 0 / 1
Jul  1 12:27:59.610: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 12:27:59.610: INFO: Found 0 / 1
Jul  1 12:28:00.484: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 12:28:00.484: INFO: Found 0 / 1
Jul  1 12:28:01.363: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 12:28:01.363: INFO: Found 0 / 1
Jul  1 12:28:02.298: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 12:28:02.298: INFO: Found 1 / 1
Jul  1 12:28:02.298: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  1 12:28:02.302: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  1 12:28:02.302: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  1 12:28:02.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe pod agnhost-master-sncv7 --namespace=kubectl-3676'
Jul  1 12:28:02.422: INFO: stderr: ""
Jul  1 12:28:02.422: INFO: stdout: "Name:         agnhost-master-sncv7\nNamespace:    kubectl-3676\nPriority:     0\nNode:         kali-worker/172.17.0.15\nStart Time:   Wed, 01 Jul 2020 12:27:54 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.2.227\nIPs:\n  IP:           10.244.2.227\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://a09bbf381f523b7b32f1fbe2d0eb541154085bd5bf7f00f6dffd365c49b71e5e\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 01 Jul 2020 12:28:01 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qrdq5 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-qrdq5:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-qrdq5\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                  Message\n  ----    ------     ----       ----                  -------\n  Normal  Scheduled    default-scheduler     Successfully assigned kubectl-3676/agnhost-master-sncv7 to kali-worker\n  Normal  Pulled     5s         kubelet, kali-worker  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    1s    
     kubelet, kali-worker  Created container agnhost-master\n  Normal  Started    1s         kubelet, kali-worker  Started container agnhost-master\n"
Jul  1 12:28:02.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3676'
Jul  1 12:28:02.552: INFO: stderr: ""
Jul  1 12:28:02.552: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-3676\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: agnhost-master-sncv7\n"
Jul  1 12:28:02.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3676'
Jul  1 12:28:02.682: INFO: stderr: ""
Jul  1 12:28:02.682: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-3676\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.100.11.227\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.227:6379\nSession Affinity:  None\nEvents:            \n"
Jul  1 12:28:02.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe node kali-control-plane'
Jul  1 12:28:02.869: INFO: stderr: ""
Jul  1 12:28:02.869: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 29 Apr 2020 09:30:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Wed, 01 Jul 2020 12:27:57 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 01 Jul 2020 12:25:08 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 01 Jul 2020 12:25:08 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 01 Jul 2020 12:25:08 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 01 Jul 2020 12:25:08 +0000   Wed, 29 Apr 2020 09:31:34 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.19\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  
hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2146cf85bed648199604ab2e0e9ac609\n  System UUID:                e83c0db4-babe-44fc-9dad-b5eeae6d23fd\n  Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n  Kernel Version:             4.15.0-88-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.18.2\n  Kube-Proxy Version:         v1.18.2\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-rvq2k                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     63d\n  kube-system                 coredns-66bff467f8-w6zxd                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     63d\n  kube-system                 etcd-kali-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         63d\n  kube-system                 kindnet-65djz                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      63d\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         63d\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         63d\n  kube-system                 kube-proxy-pnhtq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         63d\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         63d\n  local-path-storage          local-path-provisioner-bd4bb6b75-6l9ph        0 (0%)        0 (0%)      0 (0%)           0 (0%)         63d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              <none>\n"
Jul  1 12:28:02.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe namespace kubectl-3676'
Jul  1 12:28:02.969: INFO: stderr: ""
Jul  1 12:28:02.969: INFO: stdout: "Name:         kubectl-3676\nLabels:       e2e-framework=kubectl\n              e2e-run=8a1527b4-9ada-482e-88f5-fefb873032fb\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:28:02.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3676" for this suite.

• [SLOW TEST:8.714 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":197,"skipped":3306,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:28:02.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0701 12:28:04.218153       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  1 12:28:04.218: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:28:04.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4780" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":198,"skipped":3331,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:28:04.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:28:04.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul  1 12:28:07.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8555 create -f -'
Jul  1 12:28:13.671: INFO: stderr: ""
Jul  1 12:28:13.671: INFO: stdout: "e2e-test-crd-publish-openapi-5586-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul  1 12:28:13.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8555 delete e2e-test-crd-publish-openapi-5586-crds test-cr'
Jul  1 12:28:13.788: INFO: stderr: ""
Jul  1 12:28:13.788: INFO: stdout: "e2e-test-crd-publish-openapi-5586-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jul  1 12:28:13.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8555 apply -f -'
Jul  1 12:28:14.109: INFO: stderr: ""
Jul  1 12:28:14.109: INFO: stdout: "e2e-test-crd-publish-openapi-5586-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul  1 12:28:14.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8555 delete e2e-test-crd-publish-openapi-5586-crds test-cr'
Jul  1 12:28:14.210: INFO: stderr: ""
Jul  1 12:28:14.210: INFO: stdout: "e2e-test-crd-publish-openapi-5586-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul  1 12:28:14.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5586-crds'
Jul  1 12:28:14.493: INFO: stderr: ""
Jul  1 12:28:14.493: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5586-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:28:17.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8555" for this suite.

• [SLOW TEST:13.409 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":199,"skipped":3340,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:28:17.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-44665ba3-aefd-4d00-9842-36169e2e0211
STEP: Creating a pod to test consume secrets
Jul  1 12:28:18.306: INFO: Waiting up to 5m0s for pod "pod-secrets-06a9d5d6-86ef-490f-9ed8-64b9a55238d8" in namespace "secrets-4028" to be "Succeeded or Failed"
Jul  1 12:28:18.312: INFO: Pod "pod-secrets-06a9d5d6-86ef-490f-9ed8-64b9a55238d8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.955084ms
Jul  1 12:28:20.315: INFO: Pod "pod-secrets-06a9d5d6-86ef-490f-9ed8-64b9a55238d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009811475s
Jul  1 12:28:22.320: INFO: Pod "pod-secrets-06a9d5d6-86ef-490f-9ed8-64b9a55238d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013914343s
STEP: Saw pod success
Jul  1 12:28:22.320: INFO: Pod "pod-secrets-06a9d5d6-86ef-490f-9ed8-64b9a55238d8" satisfied condition "Succeeded or Failed"
Jul  1 12:28:22.323: INFO: Trying to get logs from node kali-worker pod pod-secrets-06a9d5d6-86ef-490f-9ed8-64b9a55238d8 container secret-volume-test: <nil>
STEP: delete the pod
Jul  1 12:28:23.013: INFO: Waiting for pod pod-secrets-06a9d5d6-86ef-490f-9ed8-64b9a55238d8 to disappear
Jul  1 12:28:23.037: INFO: Pod pod-secrets-06a9d5d6-86ef-490f-9ed8-64b9a55238d8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:28:23.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4028" for this suite.

• [SLOW TEST:5.285 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3340,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:28:23.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:28:23.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2429" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":201,"skipped":3342,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:28:23.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5426.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5426.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5426.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5426.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5426.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5426.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  1 12:28:34.671: INFO: DNS probes using dns-5426/dns-test-1ef1d5e6-41be-49f4-a3e0-9bc0a481ed31 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:28:34.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5426" for this suite.

• [SLOW TEST:11.586 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":202,"skipped":3367,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:28:34.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-jcnh
STEP: Creating a pod to test atomic-volume-subpath
Jul  1 12:28:35.110: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jcnh" in namespace "subpath-6333" to be "Succeeded or Failed"
Jul  1 12:28:35.311: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Pending", Reason="", readiness=false. Elapsed: 200.723304ms
Jul  1 12:28:37.314: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204403835s
Jul  1 12:28:39.367: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257174742s
Jul  1 12:28:41.436: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Running", Reason="", readiness=true. Elapsed: 6.325967899s
Jul  1 12:28:43.440: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Running", Reason="", readiness=true. Elapsed: 8.330388519s
Jul  1 12:28:45.445: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Running", Reason="", readiness=true. Elapsed: 10.334939044s
Jul  1 12:28:47.451: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Running", Reason="", readiness=true. Elapsed: 12.340862948s
Jul  1 12:28:49.455: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Running", Reason="", readiness=true. Elapsed: 14.345235254s
Jul  1 12:28:51.508: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Running", Reason="", readiness=true. Elapsed: 16.397702963s
Jul  1 12:28:53.512: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Running", Reason="", readiness=true. Elapsed: 18.401870188s
Jul  1 12:28:55.516: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Running", Reason="", readiness=true. Elapsed: 20.405757409s
Jul  1 12:28:57.537: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Running", Reason="", readiness=true. Elapsed: 22.427491311s
Jul  1 12:28:59.604: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Running", Reason="", readiness=true. Elapsed: 24.493744394s
Jul  1 12:29:01.619: INFO: Pod "pod-subpath-test-secret-jcnh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.509473712s
STEP: Saw pod success
Jul  1 12:29:01.619: INFO: Pod "pod-subpath-test-secret-jcnh" satisfied condition "Succeeded or Failed"
Jul  1 12:29:01.623: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-jcnh container test-container-subpath-secret-jcnh: <nil>
STEP: delete the pod
Jul  1 12:29:01.701: INFO: Waiting for pod pod-subpath-test-secret-jcnh to disappear
Jul  1 12:29:01.708: INFO: Pod pod-subpath-test-secret-jcnh no longer exists
STEP: Deleting pod pod-subpath-test-secret-jcnh
Jul  1 12:29:01.708: INFO: Deleting pod "pod-subpath-test-secret-jcnh" in namespace "subpath-6333"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:29:01.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6333" for this suite.

• [SLOW TEST:26.889 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":203,"skipped":3367,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:29:01.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 12:29:02.487: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 12:29:04.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203342, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203342, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203342, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203342, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:29:06.502: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203342, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203342, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203342, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203342, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 12:29:09.563: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:29:09.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2641" for this suite.
STEP: Destroying namespace "webhook-2641-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.011 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":204,"skipped":3370,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:29:09.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-6d7b2fbf-0557-42bf-b928-565436a9edc2
STEP: Creating a pod to test consume configMaps
Jul  1 12:29:09.929: INFO: Waiting up to 5m0s for pod "pod-configmaps-df06eb94-f7ec-4b6e-a223-e6f8678a4238" in namespace "configmap-7405" to be "Succeeded or Failed"
Jul  1 12:29:09.971: INFO: Pod "pod-configmaps-df06eb94-f7ec-4b6e-a223-e6f8678a4238": Phase="Pending", Reason="", readiness=false. Elapsed: 41.841353ms
Jul  1 12:29:12.305: INFO: Pod "pod-configmaps-df06eb94-f7ec-4b6e-a223-e6f8678a4238": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375705874s
Jul  1 12:29:14.742: INFO: Pod "pod-configmaps-df06eb94-f7ec-4b6e-a223-e6f8678a4238": Phase="Pending", Reason="", readiness=false. Elapsed: 4.813081206s
Jul  1 12:29:16.747: INFO: Pod "pod-configmaps-df06eb94-f7ec-4b6e-a223-e6f8678a4238": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.817701643s
STEP: Saw pod success
Jul  1 12:29:16.747: INFO: Pod "pod-configmaps-df06eb94-f7ec-4b6e-a223-e6f8678a4238" satisfied condition "Succeeded or Failed"
Jul  1 12:29:16.750: INFO: Trying to get logs from node kali-worker pod pod-configmaps-df06eb94-f7ec-4b6e-a223-e6f8678a4238 container configmap-volume-test: <nil>
STEP: delete the pod
Jul  1 12:29:16.786: INFO: Waiting for pod pod-configmaps-df06eb94-f7ec-4b6e-a223-e6f8678a4238 to disappear
Jul  1 12:29:16.793: INFO: Pod pod-configmaps-df06eb94-f7ec-4b6e-a223-e6f8678a4238 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:29:16.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7405" for this suite.

• [SLOW TEST:7.097 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3390,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:29:16.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Jul  1 12:29:16.989: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6041" to be "Succeeded or Failed"
Jul  1 12:29:17.043: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 53.859943ms
Jul  1 12:29:19.047: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057571665s
Jul  1 12:29:21.095: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105566305s
Jul  1 12:29:23.099: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109934818s
STEP: Saw pod success
Jul  1 12:29:23.099: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul  1 12:29:23.102: INFO: Trying to get logs from node kali-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul  1 12:29:23.173: INFO: Waiting for pod pod-host-path-test to disappear
Jul  1 12:29:23.328: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:29:23.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6041" for this suite.

• [SLOW TEST:6.548 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3446,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
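The framework polls pod phase every ~2s and logs lines like `Pod "…": Phase="Pending", … Elapsed: 2.057571665s`. A sketch for extracting the phase and elapsed time from such a line; the Go-duration handling covers only the `ms`/`s` suffixes that appear in this log:

```python
import re

# Matches the framework's poll lines: Phase="...", ... Elapsed: <number><unit>
LINE = re.compile(
    r'Phase="(?P<phase>[^"]+)".*Elapsed: (?P<elapsed>[\d.]+)(?P<unit>ms|s)'
)

def parse_poll(line):
    """Return (phase, elapsed_seconds) for a poll line, or None."""
    m = LINE.search(line)
    if not m:
        return None
    seconds = float(m.group("elapsed"))
    if m.group("unit") == "ms":
        seconds /= 1000.0
    return m.group("phase"), seconds

phase, secs = parse_poll(
    'Jul  1 12:29:23.099: INFO: Pod "pod-host-path-test": '
    'Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109934818s'
)
print(phase, round(secs, 2))  # → Succeeded 6.11
```

Collecting these pairs per pod reconstructs how long each pod spent in `Pending` before reaching `Succeeded`.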
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:29:23.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  1 12:29:23.515: INFO: Waiting up to 5m0s for pod "pod-8b7f0537-8290-4392-b861-283d6fc6a982" in namespace "emptydir-1504" to be "Succeeded or Failed"
Jul  1 12:29:23.543: INFO: Pod "pod-8b7f0537-8290-4392-b861-283d6fc6a982": Phase="Pending", Reason="", readiness=false. Elapsed: 28.071829ms
Jul  1 12:29:25.562: INFO: Pod "pod-8b7f0537-8290-4392-b861-283d6fc6a982": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047108556s
Jul  1 12:29:27.566: INFO: Pod "pod-8b7f0537-8290-4392-b861-283d6fc6a982": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051565137s
STEP: Saw pod success
Jul  1 12:29:27.566: INFO: Pod "pod-8b7f0537-8290-4392-b861-283d6fc6a982" satisfied condition "Succeeded or Failed"
Jul  1 12:29:27.570: INFO: Trying to get logs from node kali-worker pod pod-8b7f0537-8290-4392-b861-283d6fc6a982 container test-container: 
STEP: delete the pod
Jul  1 12:29:27.592: INFO: Waiting for pod pod-8b7f0537-8290-4392-b861-283d6fc6a982 to disappear
Jul  1 12:29:27.596: INFO: Pod pod-8b7f0537-8290-4392-b861-283d6fc6a982 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:29:27.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1504" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3446,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:29:27.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul  1 12:29:32.770: INFO: Successfully updated pod "pod-update-activedeadlineseconds-02ed56d5-6848-4559-a73e-9c773dc7198c"
Jul  1 12:29:32.770: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-02ed56d5-6848-4559-a73e-9c773dc7198c" in namespace "pods-711" to be "terminated due to deadline exceeded"
Jul  1 12:29:32.788: INFO: Pod "pod-update-activedeadlineseconds-02ed56d5-6848-4559-a73e-9c773dc7198c": Phase="Running", Reason="", readiness=true. Elapsed: 18.361354ms
Jul  1 12:29:34.806: INFO: Pod "pod-update-activedeadlineseconds-02ed56d5-6848-4559-a73e-9c773dc7198c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.036720824s
Jul  1 12:29:34.806: INFO: Pod "pod-update-activedeadlineseconds-02ed56d5-6848-4559-a73e-9c773dc7198c" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:29:34.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-711" for this suite.

• [SLOW TEST:7.629 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3465,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
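The spec above waits for the condition "terminated due to deadline exceeded", which the log shows is satisfied when the pod reaches `Phase="Failed"` with `Reason="DeadlineExceeded"` after `activeDeadlineSeconds` is shortened. A sketch of that predicate, mirroring only what the log lines show:

```python
def terminated_by_deadline(phase, reason):
    # The kubelet marks a pod Failed with reason DeadlineExceeded once its
    # activeDeadlineSeconds elapses; this is the condition the test polls for.
    return phase == "Failed" and reason == "DeadlineExceeded"

# Values taken from the two poll lines in the log:
print(terminated_by_deadline("Running", ""))                  # → False
print(terminated_by_deadline("Failed", "DeadlineExceeded"))   # → True
```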
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:29:35.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-b35deac0-fda3-4b96-9a16-4f8d949425ed
STEP: Creating a pod to test consume configMaps
Jul  1 12:29:35.939: INFO: Waiting up to 5m0s for pod "pod-configmaps-26f25ab0-42d9-49a6-9bc9-3eb038018d59" in namespace "configmap-4260" to be "Succeeded or Failed"
Jul  1 12:29:35.987: INFO: Pod "pod-configmaps-26f25ab0-42d9-49a6-9bc9-3eb038018d59": Phase="Pending", Reason="", readiness=false. Elapsed: 48.450452ms
Jul  1 12:29:37.991: INFO: Pod "pod-configmaps-26f25ab0-42d9-49a6-9bc9-3eb038018d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052527376s
Jul  1 12:29:39.996: INFO: Pod "pod-configmaps-26f25ab0-42d9-49a6-9bc9-3eb038018d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056726794s
STEP: Saw pod success
Jul  1 12:29:39.996: INFO: Pod "pod-configmaps-26f25ab0-42d9-49a6-9bc9-3eb038018d59" satisfied condition "Succeeded or Failed"
Jul  1 12:29:39.998: INFO: Trying to get logs from node kali-worker pod pod-configmaps-26f25ab0-42d9-49a6-9bc9-3eb038018d59 container configmap-volume-test: 
STEP: delete the pod
Jul  1 12:29:40.039: INFO: Waiting for pod pod-configmaps-26f25ab0-42d9-49a6-9bc9-3eb038018d59 to disappear
Jul  1 12:29:40.052: INFO: Pod pod-configmaps-26f25ab0-42d9-49a6-9bc9-3eb038018d59 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:29:40.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4260" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":209,"skipped":3496,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}

------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:29:40.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-1bba39e5-8742-40dc-9505-73c2165f2ee5
STEP: Creating a pod to test consume secrets
Jul  1 12:29:40.247: INFO: Waiting up to 5m0s for pod "pod-secrets-b5e0923b-d93e-4516-89e6-38782ec907f7" in namespace "secrets-8122" to be "Succeeded or Failed"
Jul  1 12:29:40.302: INFO: Pod "pod-secrets-b5e0923b-d93e-4516-89e6-38782ec907f7": Phase="Pending", Reason="", readiness=false. Elapsed: 55.686331ms
Jul  1 12:29:42.306: INFO: Pod "pod-secrets-b5e0923b-d93e-4516-89e6-38782ec907f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059727144s
Jul  1 12:29:44.310: INFO: Pod "pod-secrets-b5e0923b-d93e-4516-89e6-38782ec907f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063618148s
STEP: Saw pod success
Jul  1 12:29:44.310: INFO: Pod "pod-secrets-b5e0923b-d93e-4516-89e6-38782ec907f7" satisfied condition "Succeeded or Failed"
Jul  1 12:29:44.312: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-b5e0923b-d93e-4516-89e6-38782ec907f7 container secret-volume-test: 
STEP: delete the pod
Jul  1 12:29:44.370: INFO: Waiting for pod pod-secrets-b5e0923b-d93e-4516-89e6-38782ec907f7 to disappear
Jul  1 12:29:44.382: INFO: Pod pod-secrets-b5e0923b-d93e-4516-89e6-38782ec907f7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:29:44.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8122" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3496,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:29:44.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul  1 12:29:44.450: INFO: Waiting up to 5m0s for pod "downward-api-e842903c-6eed-4812-bac0-9e6f539052f8" in namespace "downward-api-7260" to be "Succeeded or Failed"
Jul  1 12:29:44.483: INFO: Pod "downward-api-e842903c-6eed-4812-bac0-9e6f539052f8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.326933ms
Jul  1 12:29:46.487: INFO: Pod "downward-api-e842903c-6eed-4812-bac0-9e6f539052f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036210179s
Jul  1 12:29:48.491: INFO: Pod "downward-api-e842903c-6eed-4812-bac0-9e6f539052f8": Phase="Running", Reason="", readiness=true. Elapsed: 4.040509518s
Jul  1 12:29:50.495: INFO: Pod "downward-api-e842903c-6eed-4812-bac0-9e6f539052f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044834544s
STEP: Saw pod success
Jul  1 12:29:50.495: INFO: Pod "downward-api-e842903c-6eed-4812-bac0-9e6f539052f8" satisfied condition "Succeeded or Failed"
Jul  1 12:29:50.498: INFO: Trying to get logs from node kali-worker2 pod downward-api-e842903c-6eed-4812-bac0-9e6f539052f8 container dapi-container: 
STEP: delete the pod
Jul  1 12:29:50.606: INFO: Waiting for pod downward-api-e842903c-6eed-4812-bac0-9e6f539052f8 to disappear
Jul  1 12:29:50.627: INFO: Pod downward-api-e842903c-6eed-4812-bac0-9e6f539052f8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:29:50.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7260" for this suite.

• [SLOW TEST:6.246 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3502,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:29:50.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Jul  1 12:29:50.872: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jul  1 12:29:50.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6134'
Jul  1 12:29:51.358: INFO: stderr: ""
Jul  1 12:29:51.358: INFO: stdout: "service/agnhost-slave created\n"
Jul  1 12:29:51.359: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jul  1 12:29:51.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6134'
Jul  1 12:29:51.897: INFO: stderr: ""
Jul  1 12:29:51.897: INFO: stdout: "service/agnhost-master created\n"
Jul  1 12:29:51.897: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul  1 12:29:51.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6134'
Jul  1 12:29:52.279: INFO: stderr: ""
Jul  1 12:29:52.279: INFO: stdout: "service/frontend created\n"
Jul  1 12:29:52.279: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jul  1 12:29:52.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6134'
Jul  1 12:29:52.568: INFO: stderr: ""
Jul  1 12:29:52.568: INFO: stdout: "deployment.apps/frontend created\n"
Jul  1 12:29:52.569: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul  1 12:29:52.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6134'
Jul  1 12:29:52.930: INFO: stderr: ""
Jul  1 12:29:52.930: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jul  1 12:29:52.930: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul  1 12:29:52.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6134'
Jul  1 12:29:53.221: INFO: stderr: ""
Jul  1 12:29:53.221: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jul  1 12:29:53.221: INFO: Waiting for all frontend pods to be Running.
Jul  1 12:30:03.272: INFO: Waiting for frontend to serve content.
Jul  1 12:30:03.283: INFO: Trying to add a new entry to the guestbook.
Jul  1 12:30:03.293: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul  1 12:30:03.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6134'
Jul  1 12:30:03.566: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  1 12:30:03.566: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul  1 12:30:03.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6134'
Jul  1 12:30:03.807: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  1 12:30:03.807: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul  1 12:30:03.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6134'
Jul  1 12:30:03.993: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  1 12:30:03.993: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul  1 12:30:03.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6134'
Jul  1 12:30:04.166: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  1 12:30:04.166: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul  1 12:30:04.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6134'
Jul  1 12:30:04.315: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  1 12:30:04.316: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul  1 12:30:04.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6134'
Jul  1 12:30:04.827: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  1 12:30:04.827: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:30:04.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6134" for this suite.

• [SLOW TEST:14.232 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":212,"skipped":3514,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
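The guestbook spec logs every kubectl invocation as `Running '/usr/local/bin/kubectl --server=… --kubeconfig=… <verb> …'`. A sketch for pulling the verb and target namespace back out of such lines (the flag handling is minimal and assumes the `--flag=value` form used in this log):

```python
import re
import shlex

# Captures the single-quoted command the framework logs for each kubectl call.
RUNNING = re.compile(r"Running '(?P<cmd>[^']+)'")

def kubectl_call(line):
    """Return (verb, namespace) for a logged kubectl invocation, or None."""
    m = RUNNING.search(line)
    if not m:
        return None
    argv = shlex.split(m.group("cmd"))
    flags = {a.split("=", 1)[0]: a.split("=", 1)[1]
             for a in argv[1:] if a.startswith("--") and "=" in a}
    # First bare argument after the binary is the kubectl verb.
    verb = next(a for a in argv[1:] if not a.startswith("-"))
    return verb, flags.get("--namespace")

line = ("Jul  1 12:30:03.299: INFO: Running '/usr/local/bin/kubectl "
        "--server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config "
        "delete --grace-period=0 --force -f - --namespace=kubectl-6134'")
print(kubectl_call(line))  # → ('delete', 'kubectl-6134')
```

Grouping these tuples by namespace makes it easy to audit which resources each spec created and force-deleted.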
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:30:04.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-fc19561d-5933-408b-b868-44b2f8983775
STEP: Creating a pod to test consume secrets
Jul  1 12:30:05.469: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-62425ef9-ad9c-4401-b2b1-2a7f5822bb95" in namespace "projected-4297" to be "Succeeded or Failed"
Jul  1 12:30:05.520: INFO: Pod "pod-projected-secrets-62425ef9-ad9c-4401-b2b1-2a7f5822bb95": Phase="Pending", Reason="", readiness=false. Elapsed: 51.555559ms
Jul  1 12:30:07.532: INFO: Pod "pod-projected-secrets-62425ef9-ad9c-4401-b2b1-2a7f5822bb95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063391654s
Jul  1 12:30:09.536: INFO: Pod "pod-projected-secrets-62425ef9-ad9c-4401-b2b1-2a7f5822bb95": Phase="Running", Reason="", readiness=true. Elapsed: 4.067294125s
Jul  1 12:30:11.586: INFO: Pod "pod-projected-secrets-62425ef9-ad9c-4401-b2b1-2a7f5822bb95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117703459s
STEP: Saw pod success
Jul  1 12:30:11.586: INFO: Pod "pod-projected-secrets-62425ef9-ad9c-4401-b2b1-2a7f5822bb95" satisfied condition "Succeeded or Failed"
Jul  1 12:30:11.589: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-62425ef9-ad9c-4401-b2b1-2a7f5822bb95 container projected-secret-volume-test: 
STEP: delete the pod
Jul  1 12:30:11.632: INFO: Waiting for pod pod-projected-secrets-62425ef9-ad9c-4401-b2b1-2a7f5822bb95 to disappear
Jul  1 12:30:11.648: INFO: Pod pod-projected-secrets-62425ef9-ad9c-4401-b2b1-2a7f5822bb95 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:30:11.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4297" for this suite.

• [SLOW TEST:6.788 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3521,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:30:11.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:30:11.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3616" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":214,"skipped":3544,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:30:11.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul  1 12:30:11.871: INFO: Waiting up to 5m0s for pod "downward-api-6a5a4710-a740-402a-b524-ceccfe9dec58" in namespace "downward-api-58" to be "Succeeded or Failed"
Jul  1 12:30:11.905: INFO: Pod "downward-api-6a5a4710-a740-402a-b524-ceccfe9dec58": Phase="Pending", Reason="", readiness=false. Elapsed: 33.818595ms
Jul  1 12:30:14.035: INFO: Pod "downward-api-6a5a4710-a740-402a-b524-ceccfe9dec58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164020476s
Jul  1 12:30:16.043: INFO: Pod "downward-api-6a5a4710-a740-402a-b524-ceccfe9dec58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.17198365s
STEP: Saw pod success
Jul  1 12:30:16.043: INFO: Pod "downward-api-6a5a4710-a740-402a-b524-ceccfe9dec58" satisfied condition "Succeeded or Failed"
Jul  1 12:30:16.077: INFO: Trying to get logs from node kali-worker2 pod downward-api-6a5a4710-a740-402a-b524-ceccfe9dec58 container dapi-container: 
STEP: delete the pod
Jul  1 12:30:16.114: INFO: Waiting for pod downward-api-6a5a4710-a740-402a-b524-ceccfe9dec58 to disappear
Jul  1 12:30:16.127: INFO: Pod downward-api-6a5a4710-a740-402a-b524-ceccfe9dec58 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:30:16.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-58" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3546,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:30:16.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-5985/configmap-test-d51ae89b-2c77-46b7-b33c-4e92a0e4eafa
STEP: Creating a pod to test consume configMaps
Jul  1 12:30:16.582: INFO: Waiting up to 5m0s for pod "pod-configmaps-438405ed-b8ce-4ae4-b4b7-c92dc0fae428" in namespace "configmap-5985" to be "Succeeded or Failed"
Jul  1 12:30:16.613: INFO: Pod "pod-configmaps-438405ed-b8ce-4ae4-b4b7-c92dc0fae428": Phase="Pending", Reason="", readiness=false. Elapsed: 31.155922ms
Jul  1 12:30:18.618: INFO: Pod "pod-configmaps-438405ed-b8ce-4ae4-b4b7-c92dc0fae428": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035536983s
Jul  1 12:30:20.879: INFO: Pod "pod-configmaps-438405ed-b8ce-4ae4-b4b7-c92dc0fae428": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29731017s
Jul  1 12:30:22.898: INFO: Pod "pod-configmaps-438405ed-b8ce-4ae4-b4b7-c92dc0fae428": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.315631969s
STEP: Saw pod success
Jul  1 12:30:22.898: INFO: Pod "pod-configmaps-438405ed-b8ce-4ae4-b4b7-c92dc0fae428" satisfied condition "Succeeded or Failed"
Jul  1 12:30:22.901: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-438405ed-b8ce-4ae4-b4b7-c92dc0fae428 container env-test: 
STEP: delete the pod
Jul  1 12:30:22.926: INFO: Waiting for pod pod-configmaps-438405ed-b8ce-4ae4-b4b7-c92dc0fae428 to disappear
Jul  1 12:30:22.948: INFO: Pod pod-configmaps-438405ed-b8ce-4ae4-b4b7-c92dc0fae428 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:30:22.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5985" for this suite.

• [SLOW TEST:6.814 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3551,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:30:22.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-29a3fbc9-c5be-4dd2-bc37-226ff995851e
Jul  1 12:30:23.067: INFO: Pod name my-hostname-basic-29a3fbc9-c5be-4dd2-bc37-226ff995851e: Found 0 pods out of 1
Jul  1 12:30:28.128: INFO: Pod name my-hostname-basic-29a3fbc9-c5be-4dd2-bc37-226ff995851e: Found 1 pods out of 1
Jul  1 12:30:28.128: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-29a3fbc9-c5be-4dd2-bc37-226ff995851e" are running
Jul  1 12:30:28.131: INFO: Pod "my-hostname-basic-29a3fbc9-c5be-4dd2-bc37-226ff995851e-h4xx4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:30:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:30:27 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:30:27 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:30:23 +0000 UTC Reason: Message:}])
Jul  1 12:30:28.131: INFO: Trying to dial the pod
Jul  1 12:30:33.143: INFO: Controller my-hostname-basic-29a3fbc9-c5be-4dd2-bc37-226ff995851e: Got expected result from replica 1 [my-hostname-basic-29a3fbc9-c5be-4dd2-bc37-226ff995851e-h4xx4]: "my-hostname-basic-29a3fbc9-c5be-4dd2-bc37-226ff995851e-h4xx4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:30:33.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6330" for this suite.

• [SLOW TEST:10.196 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":217,"skipped":3605,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:30:33.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:30:33.249: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51e9f4e7-cfdb-44a2-a2eb-7cbdad71ff74" in namespace "downward-api-7715" to be "Succeeded or Failed"
Jul  1 12:30:33.262: INFO: Pod "downwardapi-volume-51e9f4e7-cfdb-44a2-a2eb-7cbdad71ff74": Phase="Pending", Reason="", readiness=false. Elapsed: 12.840017ms
Jul  1 12:30:35.267: INFO: Pod "downwardapi-volume-51e9f4e7-cfdb-44a2-a2eb-7cbdad71ff74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017752178s
Jul  1 12:30:37.299: INFO: Pod "downwardapi-volume-51e9f4e7-cfdb-44a2-a2eb-7cbdad71ff74": Phase="Running", Reason="", readiness=true. Elapsed: 4.050285421s
Jul  1 12:30:39.302: INFO: Pod "downwardapi-volume-51e9f4e7-cfdb-44a2-a2eb-7cbdad71ff74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05283137s
STEP: Saw pod success
Jul  1 12:30:39.302: INFO: Pod "downwardapi-volume-51e9f4e7-cfdb-44a2-a2eb-7cbdad71ff74" satisfied condition "Succeeded or Failed"
Jul  1 12:30:39.303: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-51e9f4e7-cfdb-44a2-a2eb-7cbdad71ff74 container client-container: 
STEP: delete the pod
Jul  1 12:30:39.432: INFO: Waiting for pod downwardapi-volume-51e9f4e7-cfdb-44a2-a2eb-7cbdad71ff74 to disappear
Jul  1 12:30:39.446: INFO: Pod downwardapi-volume-51e9f4e7-cfdb-44a2-a2eb-7cbdad71ff74 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:30:39.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7715" for this suite.

• [SLOW TEST:6.300 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3628,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:30:39.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 12:30:41.246: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 12:30:43.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203441, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203441, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203441, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203441, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:30:45.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203441, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203441, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203441, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203441, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 12:30:48.291: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:30:48.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:30:49.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6766" for this suite.
STEP: Destroying namespace "webhook-6766-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.130 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":219,"skipped":3693,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:30:49.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jul  1 12:30:49.661: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Jul  1 12:30:50.570: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jul  1 12:30:53.204: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:30:55.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:30:57.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203450, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:31:00.632: INFO: Waited 1.412492245s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:31:02.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8234" for this suite.

• [SLOW TEST:13.130 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":220,"skipped":3706,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:31:02.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  1 12:31:13.125: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  1 12:31:13.195: INFO: Pod pod-with-poststart-http-hook still exists
Jul  1 12:31:15.195: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  1 12:31:15.201: INFO: Pod pod-with-poststart-http-hook still exists
Jul  1 12:31:17.195: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  1 12:31:17.200: INFO: Pod pod-with-poststart-http-hook still exists
Jul  1 12:31:19.195: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  1 12:31:19.200: INFO: Pod pod-with-poststart-http-hook still exists
Jul  1 12:31:21.195: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  1 12:31:21.199: INFO: Pod pod-with-poststart-http-hook still exists
Jul  1 12:31:23.195: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  1 12:31:23.202: INFO: Pod pod-with-poststart-http-hook still exists
Jul  1 12:31:25.195: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  1 12:31:25.200: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:31:25.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4400" for this suite.

• [SLOW TEST:22.495 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3751,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:31:25.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:31:25.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8075" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":222,"skipped":3770,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:31:25.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:31:29.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4145" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":223,"skipped":3800,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:31:29.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  1 12:31:30.036: INFO: Waiting up to 5m0s for pod "pod-da8c58a8-f47b-4b59-a990-f2201c75acdd" in namespace "emptydir-1327" to be "Succeeded or Failed"
Jul  1 12:31:30.062: INFO: Pod "pod-da8c58a8-f47b-4b59-a990-f2201c75acdd": Phase="Pending", Reason="", readiness=false. Elapsed: 25.432482ms
Jul  1 12:31:32.066: INFO: Pod "pod-da8c58a8-f47b-4b59-a990-f2201c75acdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029896523s
Jul  1 12:31:34.070: INFO: Pod "pod-da8c58a8-f47b-4b59-a990-f2201c75acdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033831737s
Jul  1 12:31:36.075: INFO: Pod "pod-da8c58a8-f47b-4b59-a990-f2201c75acdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038655001s
STEP: Saw pod success
Jul  1 12:31:36.075: INFO: Pod "pod-da8c58a8-f47b-4b59-a990-f2201c75acdd" satisfied condition "Succeeded or Failed"
Jul  1 12:31:36.078: INFO: Trying to get logs from node kali-worker pod pod-da8c58a8-f47b-4b59-a990-f2201c75acdd container test-container: 
STEP: delete the pod
Jul  1 12:31:36.132: INFO: Waiting for pod pod-da8c58a8-f47b-4b59-a990-f2201c75acdd to disappear
Jul  1 12:31:36.149: INFO: Pod pod-da8c58a8-f47b-4b59-a990-f2201c75acdd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:31:36.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1327" for this suite.

• [SLOW TEST:6.182 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3806,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:31:36.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 12:31:36.555: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 12:31:38.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203496, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203496, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203496, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203496, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 12:31:41.616: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jul  1 12:31:41.639: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:31:41.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7125" for this suite.
STEP: Destroying namespace "webhook-7125-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.692 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":225,"skipped":3852,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:31:41.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:31:42.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config version'
Jul  1 12:31:42.331: INFO: stderr: ""
Jul  1 12:31:42.332: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T19:09:43Z\", GoVersion:\"go1.13.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:31:42.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-941" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":226,"skipped":3872,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:31:42.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  1 12:31:43.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9507'
Jul  1 12:31:43.270: INFO: stderr: ""
Jul  1 12:31:43.270: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Jul  1 12:31:43.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9507'
Jul  1 12:31:45.566: INFO: stderr: ""
Jul  1 12:31:45.566: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:31:45.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9507" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":227,"skipped":3877,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:31:45.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jul  1 12:31:46.368: INFO: >>> kubeConfig: /root/.kube/config
Jul  1 12:31:49.373: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:32:00.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4846" for this suite.

• [SLOW TEST:14.409 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":228,"skipped":3882,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:32:00.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:32:11.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9556" for this suite.

• [SLOW TEST:11.201 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":229,"skipped":3884,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:32:11.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:32:11.465: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc1a9644-a9c8-4017-8584-b65165def008" in namespace "projected-8125" to be "Succeeded or Failed"
Jul  1 12:32:11.481: INFO: Pod "downwardapi-volume-bc1a9644-a9c8-4017-8584-b65165def008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.104875ms
Jul  1 12:32:13.485: INFO: Pod "downwardapi-volume-bc1a9644-a9c8-4017-8584-b65165def008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019585647s
Jul  1 12:32:15.546: INFO: Pod "downwardapi-volume-bc1a9644-a9c8-4017-8584-b65165def008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081006385s
STEP: Saw pod success
Jul  1 12:32:15.546: INFO: Pod "downwardapi-volume-bc1a9644-a9c8-4017-8584-b65165def008" satisfied condition "Succeeded or Failed"
Jul  1 12:32:15.550: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-bc1a9644-a9c8-4017-8584-b65165def008 container client-container: 
STEP: delete the pod
Jul  1 12:32:15.593: INFO: Waiting for pod downwardapi-volume-bc1a9644-a9c8-4017-8584-b65165def008 to disappear
Jul  1 12:32:15.619: INFO: Pod downwardapi-volume-bc1a9644-a9c8-4017-8584-b65165def008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:32:15.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8125" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3886,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:32:15.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jul  1 12:32:15.808: INFO: Pod name pod-release: Found 0 pods out of 1
Jul  1 12:32:21.001: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:32:21.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9431" for this suite.

• [SLOW TEST:5.840 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":231,"skipped":3891,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:32:21.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-f4bdaade-d060-494f-9d4f-3d5db2a5e679
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:32:22.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1665" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":232,"skipped":3912,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}

------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:32:22.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0701 12:32:35.322685       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  1 12:32:35.322: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:32:35.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5001" for this suite.

• [SLOW TEST:13.047 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":233,"skipped":3912,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:32:35.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Jul  1 12:32:40.871: INFO: Pod pod-hostip-69bf75c6-0770-42ef-8357-971ab59d62a9 has hostIP: 172.17.0.18
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:32:40.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5338" for this suite.

• [SLOW TEST:5.411 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3922,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:32:40.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  1 12:32:47.330: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:32:47.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5437" for this suite.

• [SLOW TEST:6.656 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":3984,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:32:47.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:32:53.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2575" for this suite.

• [SLOW TEST:6.155 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":3988,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:32:53.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:32:53.834: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:33:00.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-118" for this suite.

• [SLOW TEST:6.557 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":237,"skipped":3997,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:33:00.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-336349f0-b212-4622-bf0b-92e3fd177328
STEP: Creating a pod to test consume secrets
Jul  1 12:33:00.421: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-65661315-00d4-4221-bc06-3973efc48917" in namespace "projected-2684" to be "Succeeded or Failed"
Jul  1 12:33:00.467: INFO: Pod "pod-projected-secrets-65661315-00d4-4221-bc06-3973efc48917": Phase="Pending", Reason="", readiness=false. Elapsed: 46.06417ms
Jul  1 12:33:02.475: INFO: Pod "pod-projected-secrets-65661315-00d4-4221-bc06-3973efc48917": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053702018s
Jul  1 12:33:04.479: INFO: Pod "pod-projected-secrets-65661315-00d4-4221-bc06-3973efc48917": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057280254s
STEP: Saw pod success
Jul  1 12:33:04.479: INFO: Pod "pod-projected-secrets-65661315-00d4-4221-bc06-3973efc48917" satisfied condition "Succeeded or Failed"
Jul  1 12:33:04.482: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-65661315-00d4-4221-bc06-3973efc48917 container projected-secret-volume-test: 
STEP: delete the pod
Jul  1 12:33:04.673: INFO: Waiting for pod pod-projected-secrets-65661315-00d4-4221-bc06-3973efc48917 to disappear
Jul  1 12:33:04.687: INFO: Pod pod-projected-secrets-65661315-00d4-4221-bc06-3973efc48917 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:33:04.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2684" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4024,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:33:04.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:33:04.831: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68581205-8539-458a-a6cd-df9e2411e72f" in namespace "projected-7005" to be "Succeeded or Failed"
Jul  1 12:33:04.869: INFO: Pod "downwardapi-volume-68581205-8539-458a-a6cd-df9e2411e72f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.081046ms
Jul  1 12:33:06.887: INFO: Pod "downwardapi-volume-68581205-8539-458a-a6cd-df9e2411e72f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056070165s
Jul  1 12:33:08.892: INFO: Pod "downwardapi-volume-68581205-8539-458a-a6cd-df9e2411e72f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060294317s
STEP: Saw pod success
Jul  1 12:33:08.892: INFO: Pod "downwardapi-volume-68581205-8539-458a-a6cd-df9e2411e72f" satisfied condition "Succeeded or Failed"
Jul  1 12:33:08.895: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-68581205-8539-458a-a6cd-df9e2411e72f container client-container: 
STEP: delete the pod
Jul  1 12:33:09.069: INFO: Waiting for pod downwardapi-volume-68581205-8539-458a-a6cd-df9e2411e72f to disappear
Jul  1 12:33:09.124: INFO: Pod downwardapi-volume-68581205-8539-458a-a6cd-df9e2411e72f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:33:09.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7005" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4027,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:33:09.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-94af63bc-78a2-4bd3-bee4-a676f0041c8e
STEP: Creating a pod to test consume configMaps
Jul  1 12:33:09.320: INFO: Waiting up to 5m0s for pod "pod-configmaps-de01261f-f3a6-4b0e-8b18-f28ce6897745" in namespace "configmap-4953" to be "Succeeded or Failed"
Jul  1 12:33:09.328: INFO: Pod "pod-configmaps-de01261f-f3a6-4b0e-8b18-f28ce6897745": Phase="Pending", Reason="", readiness=false. Elapsed: 8.188837ms
Jul  1 12:33:11.332: INFO: Pod "pod-configmaps-de01261f-f3a6-4b0e-8b18-f28ce6897745": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012401308s
Jul  1 12:33:13.336: INFO: Pod "pod-configmaps-de01261f-f3a6-4b0e-8b18-f28ce6897745": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016546481s
STEP: Saw pod success
Jul  1 12:33:13.336: INFO: Pod "pod-configmaps-de01261f-f3a6-4b0e-8b18-f28ce6897745" satisfied condition "Succeeded or Failed"
Jul  1 12:33:13.339: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-de01261f-f3a6-4b0e-8b18-f28ce6897745 container configmap-volume-test: 
STEP: delete the pod
Jul  1 12:33:13.420: INFO: Waiting for pod pod-configmaps-de01261f-f3a6-4b0e-8b18-f28ce6897745 to disappear
Jul  1 12:33:13.461: INFO: Pod pod-configmaps-de01261f-f3a6-4b0e-8b18-f28ce6897745 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:33:13.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4953" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4041,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:33:13.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul  1 12:33:13.610: INFO: Waiting up to 5m0s for pod "pod-532603fd-4924-4a20-b472-b9c300a6e601" in namespace "emptydir-5445" to be "Succeeded or Failed"
Jul  1 12:33:13.612: INFO: Pod "pod-532603fd-4924-4a20-b472-b9c300a6e601": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275389ms
Jul  1 12:33:15.618: INFO: Pod "pod-532603fd-4924-4a20-b472-b9c300a6e601": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007909143s
Jul  1 12:33:17.621: INFO: Pod "pod-532603fd-4924-4a20-b472-b9c300a6e601": Phase="Running", Reason="", readiness=true. Elapsed: 4.011251037s
Jul  1 12:33:19.625: INFO: Pod "pod-532603fd-4924-4a20-b472-b9c300a6e601": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015447292s
STEP: Saw pod success
Jul  1 12:33:19.625: INFO: Pod "pod-532603fd-4924-4a20-b472-b9c300a6e601" satisfied condition "Succeeded or Failed"
Jul  1 12:33:19.628: INFO: Trying to get logs from node kali-worker pod pod-532603fd-4924-4a20-b472-b9c300a6e601 container test-container: 
STEP: delete the pod
Jul  1 12:33:19.769: INFO: Waiting for pod pod-532603fd-4924-4a20-b472-b9c300a6e601 to disappear
Jul  1 12:33:19.819: INFO: Pod pod-532603fd-4924-4a20-b472-b9c300a6e601 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:33:19.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5445" for this suite.

• [SLOW TEST:6.358 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4091,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:33:19.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:34:20.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2850" for this suite.

• [SLOW TEST:60.724 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4104,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:34:20.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-3661
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  1 12:34:20.811: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul  1 12:34:20.972: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 12:34:23.511: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 12:34:25.099: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 12:34:27.086: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 12:34:28.994: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 12:34:30.977: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 12:34:32.976: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 12:34:34.976: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 12:34:36.976: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 12:34:38.976: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul  1 12:34:40.977: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul  1 12:34:40.983: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jul  1 12:34:42.990: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul  1 12:34:47.030: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.251:8080/dial?request=hostname&protocol=udp&host=10.244.2.250&port=8081&tries=1'] Namespace:pod-network-test-3661 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 12:34:47.030: INFO: >>> kubeConfig: /root/.kube/config
I0701 12:34:47.061581       7 log.go:172] (0xc0028d31e0) (0xc000c68aa0) Create stream
I0701 12:34:47.061618       7 log.go:172] (0xc0028d31e0) (0xc000c68aa0) Stream added, broadcasting: 1
I0701 12:34:47.064082       7 log.go:172] (0xc0028d31e0) Reply frame received for 1
I0701 12:34:47.064124       7 log.go:172] (0xc0028d31e0) (0xc000c68be0) Create stream
I0701 12:34:47.064140       7 log.go:172] (0xc0028d31e0) (0xc000c68be0) Stream added, broadcasting: 3
I0701 12:34:47.065297       7 log.go:172] (0xc0028d31e0) Reply frame received for 3
I0701 12:34:47.065322       7 log.go:172] (0xc0028d31e0) (0xc0016c1a40) Create stream
I0701 12:34:47.065338       7 log.go:172] (0xc0028d31e0) (0xc0016c1a40) Stream added, broadcasting: 5
I0701 12:34:47.066179       7 log.go:172] (0xc0028d31e0) Reply frame received for 5
I0701 12:34:47.183643       7 log.go:172] (0xc0028d31e0) Data frame received for 3
I0701 12:34:47.183710       7 log.go:172] (0xc000c68be0) (3) Data frame handling
I0701 12:34:47.183995       7 log.go:172] (0xc000c68be0) (3) Data frame sent
I0701 12:34:47.184685       7 log.go:172] (0xc0028d31e0) Data frame received for 5
I0701 12:34:47.184728       7 log.go:172] (0xc0016c1a40) (5) Data frame handling
I0701 12:34:47.184929       7 log.go:172] (0xc0028d31e0) Data frame received for 3
I0701 12:34:47.184952       7 log.go:172] (0xc000c68be0) (3) Data frame handling
I0701 12:34:47.187112       7 log.go:172] (0xc0028d31e0) Data frame received for 1
I0701 12:34:47.187149       7 log.go:172] (0xc000c68aa0) (1) Data frame handling
I0701 12:34:47.187181       7 log.go:172] (0xc000c68aa0) (1) Data frame sent
I0701 12:34:47.187224       7 log.go:172] (0xc0028d31e0) (0xc000c68aa0) Stream removed, broadcasting: 1
I0701 12:34:47.187266       7 log.go:172] (0xc0028d31e0) Go away received
I0701 12:34:47.187381       7 log.go:172] (0xc0028d31e0) (0xc000c68aa0) Stream removed, broadcasting: 1
I0701 12:34:47.187408       7 log.go:172] (0xc0028d31e0) (0xc000c68be0) Stream removed, broadcasting: 3
I0701 12:34:47.187438       7 log.go:172] (0xc0028d31e0) (0xc0016c1a40) Stream removed, broadcasting: 5
Jul  1 12:34:47.187: INFO: Waiting for responses: map[]
Jul  1 12:34:47.191: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.251:8080/dial?request=hostname&protocol=udp&host=10.244.1.10&port=8081&tries=1'] Namespace:pod-network-test-3661 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 12:34:47.191: INFO: >>> kubeConfig: /root/.kube/config
I0701 12:34:47.226378       7 log.go:172] (0xc002caa580) (0xc0022461e0) Create stream
I0701 12:34:47.226406       7 log.go:172] (0xc002caa580) (0xc0022461e0) Stream added, broadcasting: 1
I0701 12:34:47.228168       7 log.go:172] (0xc002caa580) Reply frame received for 1
I0701 12:34:47.228192       7 log.go:172] (0xc002caa580) (0xc000c68c80) Create stream
I0701 12:34:47.228201       7 log.go:172] (0xc002caa580) (0xc000c68c80) Stream added, broadcasting: 3
I0701 12:34:47.228895       7 log.go:172] (0xc002caa580) Reply frame received for 3
I0701 12:34:47.228924       7 log.go:172] (0xc002caa580) (0xc000c69040) Create stream
I0701 12:34:47.228939       7 log.go:172] (0xc002caa580) (0xc000c69040) Stream added, broadcasting: 5
I0701 12:34:47.229677       7 log.go:172] (0xc002caa580) Reply frame received for 5
I0701 12:34:47.304265       7 log.go:172] (0xc002caa580) Data frame received for 3
I0701 12:34:47.304295       7 log.go:172] (0xc000c68c80) (3) Data frame handling
I0701 12:34:47.304316       7 log.go:172] (0xc000c68c80) (3) Data frame sent
I0701 12:34:47.305033       7 log.go:172] (0xc002caa580) Data frame received for 5
I0701 12:34:47.305052       7 log.go:172] (0xc000c69040) (5) Data frame handling
I0701 12:34:47.305348       7 log.go:172] (0xc002caa580) Data frame received for 3
I0701 12:34:47.305393       7 log.go:172] (0xc000c68c80) (3) Data frame handling
I0701 12:34:47.306868       7 log.go:172] (0xc002caa580) Data frame received for 1
I0701 12:34:47.306935       7 log.go:172] (0xc0022461e0) (1) Data frame handling
I0701 12:34:47.306977       7 log.go:172] (0xc0022461e0) (1) Data frame sent
I0701 12:34:47.307114       7 log.go:172] (0xc002caa580) (0xc0022461e0) Stream removed, broadcasting: 1
I0701 12:34:47.307236       7 log.go:172] (0xc002caa580) (0xc0022461e0) Stream removed, broadcasting: 1
I0701 12:34:47.307250       7 log.go:172] (0xc002caa580) (0xc000c68c80) Stream removed, broadcasting: 3
I0701 12:34:47.307383       7 log.go:172] (0xc002caa580) (0xc000c69040) Stream removed, broadcasting: 5
Jul  1 12:34:47.307: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:34:47.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0701 12:34:47.307989       7 log.go:172] (0xc002caa580) Go away received
STEP: Destroying namespace "pod-network-test-3661" for this suite.

• [SLOW TEST:26.761 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4110,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:34:47.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul  1 12:34:47.392: INFO: Waiting up to 5m0s for pod "downward-api-e83352ff-d2ff-4d12-9e20-b27279aea08f" in namespace "downward-api-9072" to be "Succeeded or Failed"
Jul  1 12:34:47.445: INFO: Pod "downward-api-e83352ff-d2ff-4d12-9e20-b27279aea08f": Phase="Pending", Reason="", readiness=false. Elapsed: 52.892541ms
Jul  1 12:34:49.631: INFO: Pod "downward-api-e83352ff-d2ff-4d12-9e20-b27279aea08f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23811296s
Jul  1 12:34:51.634: INFO: Pod "downward-api-e83352ff-d2ff-4d12-9e20-b27279aea08f": Phase="Running", Reason="", readiness=true. Elapsed: 4.24147487s
Jul  1 12:34:53.775: INFO: Pod "downward-api-e83352ff-d2ff-4d12-9e20-b27279aea08f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.382838966s
STEP: Saw pod success
Jul  1 12:34:53.775: INFO: Pod "downward-api-e83352ff-d2ff-4d12-9e20-b27279aea08f" satisfied condition "Succeeded or Failed"
Jul  1 12:34:53.779: INFO: Trying to get logs from node kali-worker2 pod downward-api-e83352ff-d2ff-4d12-9e20-b27279aea08f container dapi-container: 
STEP: delete the pod
Jul  1 12:34:53.825: INFO: Waiting for pod downward-api-e83352ff-d2ff-4d12-9e20-b27279aea08f to disappear
Jul  1 12:34:53.841: INFO: Pod downward-api-e83352ff-d2ff-4d12-9e20-b27279aea08f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:34:53.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9072" for this suite.

• [SLOW TEST:6.535 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4127,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:34:53.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul  1 12:35:01.651: INFO: Successfully updated pod "labelsupdate10a4e63e-2a1b-456e-b833-3c9c2d52bab1"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:35:03.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1008" for this suite.

• [SLOW TEST:10.403 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4132,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:35:04.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:35:04.539: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e040f129-d7cd-4cda-a18a-e4a24ece8c45" in namespace "projected-5665" to be "Succeeded or Failed"
Jul  1 12:35:04.703: INFO: Pod "downwardapi-volume-e040f129-d7cd-4cda-a18a-e4a24ece8c45": Phase="Pending", Reason="", readiness=false. Elapsed: 164.098106ms
Jul  1 12:35:07.177: INFO: Pod "downwardapi-volume-e040f129-d7cd-4cda-a18a-e4a24ece8c45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637590715s
Jul  1 12:35:09.180: INFO: Pod "downwardapi-volume-e040f129-d7cd-4cda-a18a-e4a24ece8c45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.641359286s
Jul  1 12:35:11.218: INFO: Pod "downwardapi-volume-e040f129-d7cd-4cda-a18a-e4a24ece8c45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.678590078s
STEP: Saw pod success
Jul  1 12:35:11.218: INFO: Pod "downwardapi-volume-e040f129-d7cd-4cda-a18a-e4a24ece8c45" satisfied condition "Succeeded or Failed"
Jul  1 12:35:11.220: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e040f129-d7cd-4cda-a18a-e4a24ece8c45 container client-container: 
STEP: delete the pod
Jul  1 12:35:11.616: INFO: Waiting for pod downwardapi-volume-e040f129-d7cd-4cda-a18a-e4a24ece8c45 to disappear
Jul  1 12:35:11.674: INFO: Pod downwardapi-volume-e040f129-d7cd-4cda-a18a-e4a24ece8c45 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:35:11.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5665" for this suite.

• [SLOW TEST:7.439 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4140,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
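The test above creates a pod whose projected downward API volume exposes the container's memory limit as a file, then reads it back from the container logs. A minimal sketch of that kind of manifest, built as a plain dict — the image, paths, and limit value are illustrative assumptions, not taken from this log:

```python
# Hedged sketch of a downward API volume pod, similar in shape to what the
# e2e test creates. Image name and mount paths are assumptions.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "busybox",  # assumed image
            "command": ["sh", "-c", "cat /etc/podinfo/memory_limit"],
            "resources": {"limits": {"memory": "64Mi"}},  # assumed limit
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "memory_limit",
                    # resourceFieldRef projects the named container's
                    # memory limit into the mounted file.
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.memory",
                    },
                }],
            },
        }],
    },
}
```

The pod runs to completion ("Succeeded or Failed" in the log) because the container exits after printing the projected file once.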
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:35:11.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-9480
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9480 to expose endpoints map[]
Jul  1 12:35:11.913: INFO: Get endpoints failed (53.244849ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul  1 12:35:12.917: INFO: successfully validated that service multi-endpoint-test in namespace services-9480 exposes endpoints map[] (1.057266783s elapsed)
STEP: Creating pod pod1 in namespace services-9480
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9480 to expose endpoints map[pod1:[100]]
Jul  1 12:35:17.399: INFO: successfully validated that service multi-endpoint-test in namespace services-9480 exposes endpoints map[pod1:[100]] (4.47532155s elapsed)
STEP: Creating pod pod2 in namespace services-9480
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9480 to expose endpoints map[pod1:[100] pod2:[101]]
Jul  1 12:35:21.844: INFO: Unexpected endpoints: found map[752dcc12-ed0a-41f6-8a99-5ec9977e787d:[100]], expected map[pod1:[100] pod2:[101]] (4.44145354s elapsed, will retry)
Jul  1 12:35:22.854: INFO: successfully validated that service multi-endpoint-test in namespace services-9480 exposes endpoints map[pod1:[100] pod2:[101]] (5.450644775s elapsed)
STEP: Deleting pod pod1 in namespace services-9480
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9480 to expose endpoints map[pod2:[101]]
Jul  1 12:35:23.944: INFO: successfully validated that service multi-endpoint-test in namespace services-9480 exposes endpoints map[pod2:[101]] (1.086326125s elapsed)
STEP: Deleting pod pod2 in namespace services-9480
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9480 to expose endpoints map[]
Jul  1 12:35:25.135: INFO: successfully validated that service multi-endpoint-test in namespace services-9480 exposes endpoints map[] (1.186620466s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:35:25.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9480" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.963 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":247,"skipped":4143,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
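The multiport test drives a single Service with two named ports at distinct target ports, then repeatedly compares the endpoints object against an expected map as pods are created and deleted. A rough sketch of the shapes involved — the target ports 100 and 101 mirror the endpoint maps in the log, while the service ports and selector are assumptions:

```python
# Sketch of a two-port Service like multi-endpoint-test in the log.
# targetPorts 100/101 match the log's endpoint maps; ports 80/81 are assumed.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "multi-endpoint-test"},
    "spec": {
        "selector": {"test": "multi-endpoint-test"},  # assumed selector
        "ports": [
            {"name": "portname1", "port": 80, "targetPort": 100},
            {"name": "portname2", "port": 81, "targetPort": 101},
        ],
    },
}

def expected_endpoints(pods):
    """Each backing pod exposes one target port; the test's expected map
    keys pod names to the list of ports that pod serves."""
    return {name: [port] for name, port in sorted(pods.items())}

# The progression the log validates: {}, then pod1, then pod1+pod2, ...
print(expected_endpoints({}))
print(expected_endpoints({"pod1": 100}))
print(expected_endpoints({"pod1": 100, "pod2": 101}))
```

The one retry in the log ("Unexpected endpoints: found map[752dcc12-…]") is the poll catching the endpoints object mid-update before pod2's address was keyed by pod name.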
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:35:25.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jul  1 12:35:25.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jul  1 12:35:37.186: INFO: >>> kubeConfig: /root/.kube/config
Jul  1 12:35:40.153: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:35:50.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9951" for this suite.

• [SLOW TEST:25.298 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":248,"skipped":4165,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
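"One multiversion CRD" in the step above means a single CustomResourceDefinition serving several versions, of which exactly one is the storage version; the API server then publishes OpenAPI schemas for every served version. A minimal sketch of such a CRD — the group, names, and schemas are illustrative, not the ones the test generates:

```python
# Hedged sketch of a multiversion CRD. Group/kind names are assumptions.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "e2e-test-crds.example.com"},
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "e2e-test-crds",
                  "singular": "e2e-test-crd",
                  "kind": "E2eTestCrd"},
        "versions": [
            # Both versions are served (so both appear in OpenAPI docs),
            # but only one may be the storage version.
            {"name": "v1", "served": True, "storage": True,
             "schema": {"openAPIV3Schema": {"type": "object"}}},
            {"name": "v2", "served": True, "storage": False,
             "schema": {"openAPIV3Schema": {"type": "object"}}},
        ],
    },
}

storage_versions = [v["name"] for v in crd["spec"]["versions"] if v["storage"]]
served_versions = [v["name"] for v in crd["spec"]["versions"] if v["served"]]
```

The second step of the test ("two CRDs") covers the same group spread across two separate CRD objects instead, which must publish without colliding.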
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:35:50.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:36:20.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7547" for this suite.

• [SLOW TEST:29.107 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":249,"skipped":4188,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
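The steps above create a ResourceQuota and then wait for the quota controller to populate `status`, which mirrors the hard limits and reports current usage per counted resource. A sketch of the object before and after calculation — the hard limits are illustrative, and a fresh namespace's usage is assumed to start at zero:

```python
# Hedged sketch of a ResourceQuota and the status the controller computes.
# The specific hard limits are assumptions, not taken from the log.
quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "test-quota"},
    "spec": {"hard": {"pods": "5", "services": "10", "configmaps": "10"}},
}

# "Ensuring resource quota status is calculated" waits until status.hard
# echoes spec.hard and status.used carries a value for each counted resource.
calculated_status = {
    "hard": dict(quota["spec"]["hard"]),
    "used": {resource: "0" for resource in quota["spec"]["hard"]},
}
```

The test polls until a status of this shape appears, which is why the block takes tens of seconds despite creating only one object.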
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:36:20.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  1 12:36:20.217: INFO: Waiting up to 5m0s for pod "pod-6ab9ae5c-644b-4e4f-8ea7-f72b00ddcab0" in namespace "emptydir-5531" to be "Succeeded or Failed"
Jul  1 12:36:20.266: INFO: Pod "pod-6ab9ae5c-644b-4e4f-8ea7-f72b00ddcab0": Phase="Pending", Reason="", readiness=false. Elapsed: 49.363442ms
Jul  1 12:36:22.288: INFO: Pod "pod-6ab9ae5c-644b-4e4f-8ea7-f72b00ddcab0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070964408s
Jul  1 12:36:24.461: INFO: Pod "pod-6ab9ae5c-644b-4e4f-8ea7-f72b00ddcab0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244421025s
Jul  1 12:36:26.691: INFO: Pod "pod-6ab9ae5c-644b-4e4f-8ea7-f72b00ddcab0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.474431024s
STEP: Saw pod success
Jul  1 12:36:26.691: INFO: Pod "pod-6ab9ae5c-644b-4e4f-8ea7-f72b00ddcab0" satisfied condition "Succeeded or Failed"
Jul  1 12:36:26.694: INFO: Trying to get logs from node kali-worker2 pod pod-6ab9ae5c-644b-4e4f-8ea7-f72b00ddcab0 container test-container: 
STEP: delete the pod
Jul  1 12:36:26.958: INFO: Waiting for pod pod-6ab9ae5c-644b-4e4f-8ea7-f72b00ddcab0 to disappear
Jul  1 12:36:26.968: INFO: Pod pod-6ab9ae5c-644b-4e4f-8ea7-f72b00ddcab0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:36:26.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5531" for this suite.

• [SLOW TEST:6.912 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4201,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
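"(root,0644,default)" decodes as: run as root, expect file mode 0644, use the default emptyDir medium (node disk rather than tmpfs). A sketch of the pod shape plus the mode the test expects to see in the volume — the image and paths are assumptions:

```python
import stat

# Hedged sketch of the emptyDir mode test pod. Image/paths are assumptions.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "emptydir-mode-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",  # assumed image
            "command": ["sh", "-c",
                        "echo mounttest > /test-volume/test-file && "
                        "chmod 0644 /test-volume/test-file && "
                        "ls -l /test-volume/test-file"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        # An empty emptyDir spec selects the default medium (node disk);
        # {"medium": "Memory"} would select tmpfs instead.
        "volumes": [{"name": "test-volume", "emptyDir": {}}],
    },
}

# 0644 on a regular file renders as owner rw, group/other read-only.
print(stat.filemode(0o100644))
```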
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:36:26.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:36:39.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1124" for this suite.

• [SLOW TEST:12.056 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":251,"skipped":4255,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:36:39.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 12:36:39.696: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 12:36:41.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203799, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203799, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203799, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203799, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 12:36:45.022: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jul  1 12:36:51.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config attach --namespace=webhook-8492 to-be-attached-pod -i -c=container1'
Jul  1 12:36:51.281: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:36:51.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8492" for this suite.
STEP: Destroying namespace "webhook-8492-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.393 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":252,"skipped":4257,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
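`kubectl attach` (rc: 1 above) is denied because attach is admitted through the `pods/attach` subresource with a CONNECT operation, which is what the webhook registers against. A sketch of that registration — the webhook name, service path, and CA bundle placeholder are assumptions; the namespace and service name follow the log:

```python
# Hedged sketch of a ValidatingWebhookConfiguration denying pod attach.
webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "deny-attaching-pod.example.com"},  # assumed name
    "webhooks": [{
        "name": "deny-attaching-pod.example.com",
        "rules": [{
            "apiGroups": [""],
            "apiVersions": ["v1"],
            "operations": ["CONNECT"],     # kubectl attach issues CONNECT
            "resources": ["pods/attach"],  # the attach subresource
        }],
        "clientConfig": {
            "service": {"namespace": "webhook-8492",
                        "name": "e2e-test-webhook",
                        "path": "/pods/attach"},  # path is an assumption
            "caBundle": "<base64 CA bundle>",
        },
        "failurePolicy": "Fail",
        "sideEffects": "None",
        "admissionReviewVersions": ["v1"],
    }],
}
```

With `failurePolicy: Fail`, an unreachable webhook would also block attaches, which is why the test first waits for the webhook deployment and service endpoint to be ready.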
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:36:51.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-1066/configmap-test-155665b0-1020-4b3f-8a00-bd461ebff8fc
STEP: Creating a pod to test consume configMaps
Jul  1 12:36:51.498: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec215902-1841-4161-b430-b0b34a0a8025" in namespace "configmap-1066" to be "Succeeded or Failed"
Jul  1 12:36:51.502: INFO: Pod "pod-configmaps-ec215902-1841-4161-b430-b0b34a0a8025": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161713ms
Jul  1 12:36:53.611: INFO: Pod "pod-configmaps-ec215902-1841-4161-b430-b0b34a0a8025": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113056198s
Jul  1 12:36:55.615: INFO: Pod "pod-configmaps-ec215902-1841-4161-b430-b0b34a0a8025": Phase="Running", Reason="", readiness=true. Elapsed: 4.116736856s
Jul  1 12:36:57.618: INFO: Pod "pod-configmaps-ec215902-1841-4161-b430-b0b34a0a8025": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.119814074s
STEP: Saw pod success
Jul  1 12:36:57.618: INFO: Pod "pod-configmaps-ec215902-1841-4161-b430-b0b34a0a8025" satisfied condition "Succeeded or Failed"
Jul  1 12:36:57.620: INFO: Trying to get logs from node kali-worker pod pod-configmaps-ec215902-1841-4161-b430-b0b34a0a8025 container env-test: 
STEP: delete the pod
Jul  1 12:36:57.686: INFO: Waiting for pod pod-configmaps-ec215902-1841-4161-b430-b0b34a0a8025 to disappear
Jul  1 12:36:57.697: INFO: Pod pod-configmaps-ec215902-1841-4161-b430-b0b34a0a8025 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:36:57.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1066" for this suite.

• [SLOW TEST:6.279 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4279,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
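The environment-variable route above surfaces a ConfigMap key through `valueFrom.configMapKeyRef` in the container's `env` list. A small sketch of that wiring, with the lookup the kubelet effectively performs — key names and values are illustrative:

```python
# Hedged sketch of consuming a ConfigMap via an environment variable.
# Key/value names are assumptions.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test-example"},
    "data": {"data-1": "value-1"},
}

# The container's env entry references the ConfigMap key by name.
pod_env = [{
    "name": "CONFIG_DATA_1",
    "valueFrom": {"configMapKeyRef": {
        "name": configmap["metadata"]["name"],
        "key": "data-1",
    }},
}]

# What the container observes: each ref resolved to the key's value.
resolved = {e["name"]: configmap["data"][e["valueFrom"]["configMapKeyRef"]["key"]]
            for e in pod_env}
print(resolved)
```

Unlike the volume-based variants later in this log, env-var values are resolved once at container start and do not update if the ConfigMap changes.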
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:36:57.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-9fe7346e-3867-4048-aa80-d6f9673c682d
STEP: Creating a pod to test consume configMaps
Jul  1 12:36:58.442: INFO: Waiting up to 5m0s for pod "pod-configmaps-a289bc0a-a65e-470d-94c7-4bace10498f3" in namespace "configmap-7688" to be "Succeeded or Failed"
Jul  1 12:36:58.500: INFO: Pod "pod-configmaps-a289bc0a-a65e-470d-94c7-4bace10498f3": Phase="Pending", Reason="", readiness=false. Elapsed: 58.535047ms
Jul  1 12:37:00.548: INFO: Pod "pod-configmaps-a289bc0a-a65e-470d-94c7-4bace10498f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106109138s
Jul  1 12:37:02.620: INFO: Pod "pod-configmaps-a289bc0a-a65e-470d-94c7-4bace10498f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.178580268s
STEP: Saw pod success
Jul  1 12:37:02.620: INFO: Pod "pod-configmaps-a289bc0a-a65e-470d-94c7-4bace10498f3" satisfied condition "Succeeded or Failed"
Jul  1 12:37:02.624: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-a289bc0a-a65e-470d-94c7-4bace10498f3 container configmap-volume-test: 
STEP: delete the pod
Jul  1 12:37:02.767: INFO: Waiting for pod pod-configmaps-a289bc0a-a65e-470d-94c7-4bace10498f3 to disappear
Jul  1 12:37:02.943: INFO: Pod pod-configmaps-a289bc0a-a65e-470d-94c7-4bace10498f3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:37:02.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7688" for this suite.

• [SLOW TEST:5.255 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4322,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:37:02.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:37:03.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5340" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":255,"skipped":4364,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
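The patch step above applies a partial object on top of the stored secret; the labels it adds are then used both as the LabelSelector for deletion and as the filter in the final list check. A simplified sketch of that merge — the label keys and data are illustrative, and the `merge` helper is a deliberate simplification of strategic merge patch (maps merge, scalars replace):

```python
import base64

# Hedged sketch of patching a Secret. Names/labels/values are assumptions.
secret = {
    "metadata": {"name": "test-secret", "labels": {"testsecret": "true"}},
    "data": {"key": base64.b64encode(b"value").decode()},
}

patch = {
    "metadata": {"labels": {"testsecret-constant": "true"}},  # added label
    "data": {"key": base64.b64encode(b"value1").decode()},    # updated data
}

def merge(base, overlay):
    """Recursively overlay patch fields onto the stored object: nested maps
    merge, scalar fields replace. A simplification of strategic merge patch."""
    out = dict(base)
    for k, v in overlay.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = merge(out[k], v)
        else:
            out[k] = v
    return out

patched = merge(secret, patch)
```

Deleting "using a LabelSelector" then means listing with a selector such as `testsecret-constant=true` and deleting the matches, which is why the final list step searches for the patched label.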
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:37:03.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  1 12:37:12.253: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:37:12.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8628" for this suite.

• [SLOW TEST:8.657 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4384,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
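The "Expected: &{DONE}" check above compares the container status's terminated message against what the container wrote to its `terminationMessagePath`; the kubelet copies that file into the status when the container exits. A sketch of the pod shape — the image, UID, and message path are assumptions chosen to match the test's constraints (non-root user, non-default path):

```python
# Hedged sketch of the termination-message test pod.
# Image, UID, and the custom path are assumptions.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "termination-message-example"},
    "spec": {
        "restartPolicy": "Never",
        "securityContext": {"runAsUser": 1000},  # non-root, assumed UID
        "containers": [{
            "name": "termination-message-container",
            "image": "busybox",  # assumed image
            # Write the message to the non-default path, then exit 0,
            # so the pod reaches Succeeded as the log shows.
            "command": ["sh", "-c",
                        "echo -n DONE > /dev/termination-custom-log"],
            "terminationMessagePath": "/dev/termination-custom-log",
            "terminationMessagePolicy": "File",
        }],
    },
}

default_path = "/dev/termination-log"
```

Running as a non-root user matters here because the kubelet must still be able to read back a file the unprivileged container created at a non-default location.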
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:37:12.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-2feb413e-0870-416e-9448-f2a5b782937a
STEP: Creating a pod to test consume secrets
Jul  1 12:37:12.490: INFO: Waiting up to 5m0s for pod "pod-secrets-67dd036c-0cbc-4b90-a4e6-b5c82b13cada" in namespace "secrets-9666" to be "Succeeded or Failed"
Jul  1 12:37:12.530: INFO: Pod "pod-secrets-67dd036c-0cbc-4b90-a4e6-b5c82b13cada": Phase="Pending", Reason="", readiness=false. Elapsed: 40.00717ms
Jul  1 12:37:14.534: INFO: Pod "pod-secrets-67dd036c-0cbc-4b90-a4e6-b5c82b13cada": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043399936s
Jul  1 12:37:16.597: INFO: Pod "pod-secrets-67dd036c-0cbc-4b90-a4e6-b5c82b13cada": Phase="Running", Reason="", readiness=true. Elapsed: 4.107208729s
Jul  1 12:37:18.608: INFO: Pod "pod-secrets-67dd036c-0cbc-4b90-a4e6-b5c82b13cada": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117569918s
STEP: Saw pod success
Jul  1 12:37:18.608: INFO: Pod "pod-secrets-67dd036c-0cbc-4b90-a4e6-b5c82b13cada" satisfied condition "Succeeded or Failed"
Jul  1 12:37:18.612: INFO: Trying to get logs from node kali-worker pod pod-secrets-67dd036c-0cbc-4b90-a4e6-b5c82b13cada container secret-volume-test: 
STEP: delete the pod
Jul  1 12:37:18.801: INFO: Waiting for pod pod-secrets-67dd036c-0cbc-4b90-a4e6-b5c82b13cada to disappear
Jul  1 12:37:18.890: INFO: Pod pod-secrets-67dd036c-0cbc-4b90-a4e6-b5c82b13cada no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:37:18.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9666" for this suite.

• [SLOW TEST:6.629 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4403,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:37:18.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-eed5f4ee-689e-434b-8791-66ea4182a556
STEP: Creating a pod to test consume secrets
Jul  1 12:37:19.061: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b7b06e2a-d038-4cf0-a19e-cadaab00abf7" in namespace "projected-5002" to be "Succeeded or Failed"
Jul  1 12:37:19.069: INFO: Pod "pod-projected-secrets-b7b06e2a-d038-4cf0-a19e-cadaab00abf7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331257ms
Jul  1 12:37:21.074: INFO: Pod "pod-projected-secrets-b7b06e2a-d038-4cf0-a19e-cadaab00abf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012922912s
Jul  1 12:37:23.078: INFO: Pod "pod-projected-secrets-b7b06e2a-d038-4cf0-a19e-cadaab00abf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017603181s
STEP: Saw pod success
Jul  1 12:37:23.078: INFO: Pod "pod-projected-secrets-b7b06e2a-d038-4cf0-a19e-cadaab00abf7" satisfied condition "Succeeded or Failed"
Jul  1 12:37:23.082: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-b7b06e2a-d038-4cf0-a19e-cadaab00abf7 container projected-secret-volume-test: 
STEP: delete the pod
Jul  1 12:37:23.119: INFO: Waiting for pod pod-projected-secrets-b7b06e2a-d038-4cf0-a19e-cadaab00abf7 to disappear
Jul  1 12:37:23.127: INFO: Pod pod-projected-secrets-b7b06e2a-d038-4cf0-a19e-cadaab00abf7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:37:23.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5002" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4420,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:37:23.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul  1 12:37:27.866: INFO: Successfully updated pod "labelsupdate7691e5ac-3889-421f-8316-a6dca8a84a64"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:37:30.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-420" for this suite.

• [SLOW TEST:6.999 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4428,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:37:30.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul  1 12:37:44.919: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  1 12:37:44.988: INFO: Pod pod-with-prestop-http-hook still exists
Jul  1 12:37:46.989: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  1 12:37:46.992: INFO: Pod pod-with-prestop-http-hook still exists
Jul  1 12:37:48.989: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  1 12:37:48.993: INFO: Pod pod-with-prestop-http-hook still exists
Jul  1 12:37:50.989: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  1 12:37:51.118: INFO: Pod pod-with-prestop-http-hook still exists
Jul  1 12:37:52.989: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  1 12:37:52.993: INFO: Pod pod-with-prestop-http-hook still exists
Jul  1 12:37:54.989: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  1 12:37:54.993: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:37:54.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6466" for this suite.

• [SLOW TEST:24.848 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4444,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:37:55.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:37:55.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:37:59.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1069" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4498,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:37:59.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3276.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3276.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3276.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3276.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3276.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3276.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3276.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3276.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3276.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3276.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 176.85.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.85.176_udp@PTR;check="$$(dig +tcp +noall +answer +search 176.85.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.85.176_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3276.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3276.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3276.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3276.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3276.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3276.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3276.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3276.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3276.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3276.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3276.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 176.85.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.85.176_udp@PTR;check="$$(dig +tcp +noall +answer +search 176.85.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.85.176_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  1 12:38:07.625: INFO: Unable to read wheezy_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:07.650: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:07.831: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:07.890: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:08.073: INFO: Unable to read jessie_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:08.076: INFO: Unable to read jessie_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:08.079: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:08.081: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:08.146: INFO: Lookups using dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a failed for: [wheezy_udp@dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_udp@dns-test-service.dns-3276.svc.cluster.local jessie_tcp@dns-test-service.dns-3276.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local]

Jul  1 12:38:13.150: INFO: Unable to read wheezy_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:13.154: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:13.157: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:13.168: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:13.190: INFO: Unable to read jessie_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:13.193: INFO: Unable to read jessie_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:13.196: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:13.199: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:13.292: INFO: Lookups using dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a failed for: [wheezy_udp@dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_udp@dns-test-service.dns-3276.svc.cluster.local jessie_tcp@dns-test-service.dns-3276.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local]

Jul  1 12:38:18.151: INFO: Unable to read wheezy_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:18.155: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:18.158: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:18.161: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:18.182: INFO: Unable to read jessie_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:18.185: INFO: Unable to read jessie_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:18.188: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:18.191: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:18.211: INFO: Lookups using dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a failed for: [wheezy_udp@dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_udp@dns-test-service.dns-3276.svc.cluster.local jessie_tcp@dns-test-service.dns-3276.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local]

Jul  1 12:38:23.152: INFO: Unable to read wheezy_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:23.334: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:23.338: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:23.341: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:23.358: INFO: Unable to read jessie_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:23.360: INFO: Unable to read jessie_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:23.363: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:23.365: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:23.381: INFO: Lookups using dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a failed for: [wheezy_udp@dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_udp@dns-test-service.dns-3276.svc.cluster.local jessie_tcp@dns-test-service.dns-3276.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local]

Jul  1 12:38:28.152: INFO: Unable to read wheezy_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:28.155: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:28.159: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:28.162: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:28.183: INFO: Unable to read jessie_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:28.186: INFO: Unable to read jessie_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:28.190: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:28.193: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:28.301: INFO: Lookups using dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a failed for: [wheezy_udp@dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_udp@dns-test-service.dns-3276.svc.cluster.local jessie_tcp@dns-test-service.dns-3276.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local]

Jul  1 12:38:33.178: INFO: Unable to read wheezy_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:33.182: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:33.222: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:33.226: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:33.248: INFO: Unable to read jessie_udp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:33.251: INFO: Unable to read jessie_tcp@dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:33.254: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:33.257: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local from pod dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a: the server could not find the requested resource (get pods dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a)
Jul  1 12:38:33.304: INFO: Lookups using dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a failed for: [wheezy_udp@dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@dns-test-service.dns-3276.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_udp@dns-test-service.dns-3276.svc.cluster.local jessie_tcp@dns-test-service.dns-3276.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3276.svc.cluster.local]

Jul  1 12:38:38.226: INFO: DNS probes using dns-3276/dns-test-af0613dc-8b81-4cce-9cb4-8d4e83955e0a succeeded
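The probe names in the failure list above follow a fixed pattern: `<client>_<proto>@<record>`, where the client is one of the two test images (wheezy, jessie), the protocol is udp or tcp, and the record is either the service's A/AAAA name or its `_http._tcp` SRV name. A minimal sketch of how that set of eight names can be generated (illustrative only, not the e2e framework's actual code):

```python
# Illustrative sketch: build the eight DNS probe names seen in the log above.
service = "dns-test-service"
namespace = "dns-3276"
records = [
    f"{service}.{namespace}.svc.cluster.local",             # service A/AAAA record
    f"_http._tcp.{service}.{namespace}.svc.cluster.local",  # SRV record for the http port
]
probe_names = [
    f"{client}_{proto}@{record}"
    for client in ("wheezy", "jessie")  # the two test client images
    for proto in ("udp", "tcp")
    for record in records
]
```

The probes keep failing with "could not find the requested resource" until DNS converges, at which point the run above reports success.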

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:38:39.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3276" for this suite.

• [SLOW TEST:39.768 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":262,"skipped":4500,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:38:39.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 12:38:39.674: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 12:38:41.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203919, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203919, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203919, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203919, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:38:44.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203919, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203919, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203919, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203919, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
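The repeated "deployment status" dumps above come from a poll loop: the framework reads `DeploymentStatus` until `AvailableReplicas` reaches the desired count or a deadline passes. A minimal sketch of that pattern (function and parameter names are assumptions, not the framework's real helper):

```python
import time

# Sketch of a deployment-availability wait loop: poll a status getter until
# enough replicas are available, or raise on timeout.
def wait_for_available(get_status, want, timeout=30.0, interval=2.0,
                       now=time.monotonic, sleep=time.sleep):
    deadline = now() + timeout
    while now() < deadline:
        status = get_status()  # e.g. a dict read from the Deployment's status
        if status.get("availableReplicas", 0) >= want:
            return status
        sleep(interval)
    raise TimeoutError(f"deployment never reached {want} available replicas")
```

In the log, the first two polls see `AvailableReplicas:0` with the `Available` condition `False` (`MinimumReplicasUnavailable`); the loop exits once the webhook pod becomes ready.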
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 12:38:47.387: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:38:48.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1680" for this suite.
STEP: Destroying namespace "webhook-1680-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.124 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":263,"skipped":4512,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:38:48.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  1 12:38:49.331: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  1 12:38:51.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203929, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203929, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203929, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203929, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:38:53.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203929, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203929, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203929, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729203929, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  1 12:38:56.475: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
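The denial steps above hinge on the webhook returning `allowed: false` in its AdmissionReview response. A minimal sketch of that response shape (field names follow the `admission.k8s.io/v1` API; the status message is illustrative):

```python
# Sketch of an admission.k8s.io/v1 AdmissionReview denial response.
def deny(review):
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],  # must echo the request UID
            "allowed": False,
            "status": {"code": 403, "message": "denied by e2e test webhook"},
        },
    }
```

The whitelisted-namespace step works because the webhook configuration's namespace selector excludes that namespace, so the API server never calls the webhook for objects created there.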
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:39:06.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3764" for this suite.
STEP: Destroying namespace "webhook-3764-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.664 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":264,"skipped":4542,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:39:06.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9915
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-9915
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9915
Jul  1 12:39:07.088: INFO: Found 0 stateful pods, waiting for 1
Jul  1 12:39:17.322: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with an unhealthy stateful pod
Jul  1 12:39:17.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  1 12:39:25.874: INFO: stderr: "I0701 12:39:25.741730    3272 log.go:172] (0xc000b2e840) (0xc0009541e0) Create stream\nI0701 12:39:25.741776    3272 log.go:172] (0xc000b2e840) (0xc0009541e0) Stream added, broadcasting: 1\nI0701 12:39:25.753335    3272 log.go:172] (0xc000b2e840) Reply frame received for 1\nI0701 12:39:25.753396    3272 log.go:172] (0xc000b2e840) (0xc0008f20a0) Create stream\nI0701 12:39:25.753410    3272 log.go:172] (0xc000b2e840) (0xc0008f20a0) Stream added, broadcasting: 3\nI0701 12:39:25.757607    3272 log.go:172] (0xc000b2e840) Reply frame received for 3\nI0701 12:39:25.757645    3272 log.go:172] (0xc000b2e840) (0xc00078b4a0) Create stream\nI0701 12:39:25.757655    3272 log.go:172] (0xc000b2e840) (0xc00078b4a0) Stream added, broadcasting: 5\nI0701 12:39:25.758621    3272 log.go:172] (0xc000b2e840) Reply frame received for 5\nI0701 12:39:25.827388    3272 log.go:172] (0xc000b2e840) Data frame received for 5\nI0701 12:39:25.827412    3272 log.go:172] (0xc00078b4a0) (5) Data frame handling\nI0701 12:39:25.827424    3272 log.go:172] (0xc00078b4a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 12:39:25.863267    3272 log.go:172] (0xc000b2e840) Data frame received for 3\nI0701 12:39:25.863308    3272 log.go:172] (0xc0008f20a0) (3) Data frame handling\nI0701 12:39:25.863320    3272 log.go:172] (0xc0008f20a0) (3) Data frame sent\nI0701 12:39:25.863330    3272 log.go:172] (0xc000b2e840) Data frame received for 3\nI0701 12:39:25.863340    3272 log.go:172] (0xc0008f20a0) (3) Data frame handling\nI0701 12:39:25.863623    3272 log.go:172] (0xc000b2e840) Data frame received for 5\nI0701 12:39:25.863642    3272 log.go:172] (0xc00078b4a0) (5) Data frame handling\nI0701 12:39:25.866056    3272 log.go:172] (0xc000b2e840) Data frame received for 1\nI0701 12:39:25.866106    3272 log.go:172] (0xc0009541e0) (1) Data frame handling\nI0701 12:39:25.866159    3272 log.go:172] (0xc0009541e0) (1) Data frame sent\nI0701 12:39:25.866202  
  3272 log.go:172] (0xc000b2e840) (0xc0009541e0) Stream removed, broadcasting: 1\nI0701 12:39:25.866237    3272 log.go:172] (0xc000b2e840) Go away received\nI0701 12:39:25.866811    3272 log.go:172] (0xc000b2e840) (0xc0009541e0) Stream removed, broadcasting: 1\nI0701 12:39:25.866841    3272 log.go:172] (0xc000b2e840) (0xc0008f20a0) Stream removed, broadcasting: 3\nI0701 12:39:25.866860    3272 log.go:172] (0xc000b2e840) (0xc00078b4a0) Stream removed, broadcasting: 5\n"
Jul  1 12:39:25.874: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  1 12:39:25.874: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  1 12:39:25.878: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul  1 12:39:35.883: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 12:39:35.883: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 12:39:35.945: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
Jul  1 12:39:35.945: INFO: ss-0  kali-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  }]
Jul  1 12:39:35.945: INFO: 
Jul  1 12:39:35.945: INFO: StatefulSet ss has not reached scale 3, at 1
Jul  1 12:39:36.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.947920597s
Jul  1 12:39:38.204: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.942600031s
Jul  1 12:39:40.487: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.688834227s
Jul  1 12:39:42.162: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.406121118s
Jul  1 12:39:43.400: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.73149107s
Jul  1 12:39:44.405: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.492909643s
Jul  1 12:39:45.410: INFO: Verifying statefulset ss doesn't scale past 3 for another 488.434842ms
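The countdown above is a hold-and-wait check: while waiting for the StatefulSet to reach the target scale, the test keeps confirming it never overshoots it. A minimal sketch of that pattern (names are assumptions, not the framework's real helper):

```python
import time

# Sketch: wait until replicas == target, asserting along the way that the
# controller never scales past the target.
def wait_for_scale(get_replicas, target, timeout=10.0,
                   now=time.monotonic, sleep=time.sleep):
    deadline = now() + timeout
    while True:
        replicas = get_replicas()
        assert replicas <= target, f"scaled past {target}: {replicas}"
        if replicas == target:
            return
        remaining = deadline - now()
        if remaining <= 0:
            raise TimeoutError(f"did not reach scale {target}")
        sleep(min(1.0, remaining))
```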
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-9915
Jul  1 12:39:46.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:39:47.868: INFO: stderr: "I0701 12:39:47.786137    3305 log.go:172] (0xc00003aa50) (0xc0006ab540) Create stream\nI0701 12:39:47.786197    3305 log.go:172] (0xc00003aa50) (0xc0006ab540) Stream added, broadcasting: 1\nI0701 12:39:47.788206    3305 log.go:172] (0xc00003aa50) Reply frame received for 1\nI0701 12:39:47.788245    3305 log.go:172] (0xc00003aa50) (0xc000966000) Create stream\nI0701 12:39:47.788264    3305 log.go:172] (0xc00003aa50) (0xc000966000) Stream added, broadcasting: 3\nI0701 12:39:47.789372    3305 log.go:172] (0xc00003aa50) Reply frame received for 3\nI0701 12:39:47.789417    3305 log.go:172] (0xc00003aa50) (0xc0006ab5e0) Create stream\nI0701 12:39:47.789430    3305 log.go:172] (0xc00003aa50) (0xc0006ab5e0) Stream added, broadcasting: 5\nI0701 12:39:47.790215    3305 log.go:172] (0xc00003aa50) Reply frame received for 5\nI0701 12:39:47.851662    3305 log.go:172] (0xc00003aa50) Data frame received for 5\nI0701 12:39:47.851758    3305 log.go:172] (0xc0006ab5e0) (5) Data frame handling\nI0701 12:39:47.851786    3305 log.go:172] (0xc0006ab5e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 12:39:47.859742    3305 log.go:172] (0xc00003aa50) Data frame received for 5\nI0701 12:39:47.859774    3305 log.go:172] (0xc0006ab5e0) (5) Data frame handling\nI0701 12:39:47.859803    3305 log.go:172] (0xc00003aa50) Data frame received for 3\nI0701 12:39:47.859824    3305 log.go:172] (0xc000966000) (3) Data frame handling\nI0701 12:39:47.859850    3305 log.go:172] (0xc000966000) (3) Data frame sent\nI0701 12:39:47.859860    3305 log.go:172] (0xc00003aa50) Data frame received for 3\nI0701 12:39:47.859868    3305 log.go:172] (0xc000966000) (3) Data frame handling\nI0701 12:39:47.862903    3305 log.go:172] (0xc00003aa50) Data frame received for 1\nI0701 12:39:47.862921    3305 log.go:172] (0xc0006ab540) (1) Data frame handling\nI0701 12:39:47.862932    3305 log.go:172] (0xc0006ab540) (1) Data frame sent\nI0701 12:39:47.862944  
  3305 log.go:172] (0xc00003aa50) (0xc0006ab540) Stream removed, broadcasting: 1\nI0701 12:39:47.863159    3305 log.go:172] (0xc00003aa50) (0xc0006ab540) Stream removed, broadcasting: 1\nI0701 12:39:47.863176    3305 log.go:172] (0xc00003aa50) (0xc000966000) Stream removed, broadcasting: 3\nI0701 12:39:47.863187    3305 log.go:172] (0xc00003aa50) (0xc0006ab5e0) Stream removed, broadcasting: 5\n"
Jul  1 12:39:47.868: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  1 12:39:47.868: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  1 12:39:47.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:39:48.062: INFO: stderr: "I0701 12:39:47.990346    3325 log.go:172] (0xc0008322c0) (0xc000732140) Create stream\nI0701 12:39:47.990396    3325 log.go:172] (0xc0008322c0) (0xc000732140) Stream added, broadcasting: 1\nI0701 12:39:47.993100    3325 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0701 12:39:47.993320    3325 log.go:172] (0xc0008322c0) (0xc00077c000) Create stream\nI0701 12:39:47.993345    3325 log.go:172] (0xc0008322c0) (0xc00077c000) Stream added, broadcasting: 3\nI0701 12:39:47.994352    3325 log.go:172] (0xc0008322c0) Reply frame received for 3\nI0701 12:39:47.994380    3325 log.go:172] (0xc0008322c0) (0xc00077c0a0) Create stream\nI0701 12:39:47.994390    3325 log.go:172] (0xc0008322c0) (0xc00077c0a0) Stream added, broadcasting: 5\nI0701 12:39:47.995255    3325 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0701 12:39:48.054259    3325 log.go:172] (0xc0008322c0) Data frame received for 5\nI0701 12:39:48.054288    3325 log.go:172] (0xc00077c0a0) (5) Data frame handling\nI0701 12:39:48.054296    3325 log.go:172] (0xc00077c0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0701 12:39:48.054319    3325 log.go:172] (0xc0008322c0) Data frame received for 3\nI0701 12:39:48.054382    3325 log.go:172] (0xc00077c000) (3) Data frame handling\nI0701 12:39:48.054410    3325 log.go:172] (0xc00077c000) (3) Data frame sent\nI0701 12:39:48.054427    3325 log.go:172] (0xc0008322c0) Data frame received for 3\nI0701 12:39:48.054442    3325 log.go:172] (0xc00077c000) (3) Data frame handling\nI0701 12:39:48.054483    3325 log.go:172] (0xc0008322c0) Data frame received for 5\nI0701 12:39:48.054496    3325 log.go:172] (0xc00077c0a0) (5) Data frame handling\nI0701 12:39:48.055735    3325 log.go:172] (0xc0008322c0) Data frame received for 1\nI0701 12:39:48.055753    3325 log.go:172] (0xc000732140) (1) Data frame handling\nI0701 12:39:48.055763    3325 
log.go:172] (0xc000732140) (1) Data frame sent\nI0701 12:39:48.055878    3325 log.go:172] (0xc0008322c0) (0xc000732140) Stream removed, broadcasting: 1\nI0701 12:39:48.055900    3325 log.go:172] (0xc0008322c0) Go away received\nI0701 12:39:48.056289    3325 log.go:172] (0xc0008322c0) (0xc000732140) Stream removed, broadcasting: 1\nI0701 12:39:48.056312    3325 log.go:172] (0xc0008322c0) (0xc00077c000) Stream removed, broadcasting: 3\nI0701 12:39:48.056339    3325 log.go:172] (0xc0008322c0) (0xc00077c0a0) Stream removed, broadcasting: 5\n"
Jul  1 12:39:48.063: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  1 12:39:48.063: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  1 12:39:48.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:39:48.424: INFO: stderr: "I0701 12:39:48.337525    3338 log.go:172] (0xc0009f28f0) (0xc0006bb680) Create stream\nI0701 12:39:48.337580    3338 log.go:172] (0xc0009f28f0) (0xc0006bb680) Stream added, broadcasting: 1\nI0701 12:39:48.340044    3338 log.go:172] (0xc0009f28f0) Reply frame received for 1\nI0701 12:39:48.340089    3338 log.go:172] (0xc0009f28f0) (0xc0006bb720) Create stream\nI0701 12:39:48.340101    3338 log.go:172] (0xc0009f28f0) (0xc0006bb720) Stream added, broadcasting: 3\nI0701 12:39:48.340853    3338 log.go:172] (0xc0009f28f0) Reply frame received for 3\nI0701 12:39:48.340883    3338 log.go:172] (0xc0009f28f0) (0xc00065d5e0) Create stream\nI0701 12:39:48.340894    3338 log.go:172] (0xc0009f28f0) (0xc00065d5e0) Stream added, broadcasting: 5\nI0701 12:39:48.341805    3338 log.go:172] (0xc0009f28f0) Reply frame received for 5\nI0701 12:39:48.415661    3338 log.go:172] (0xc0009f28f0) Data frame received for 3\nI0701 12:39:48.415692    3338 log.go:172] (0xc0006bb720) (3) Data frame handling\nI0701 12:39:48.415718    3338 log.go:172] (0xc0006bb720) (3) Data frame sent\nI0701 12:39:48.415728    3338 log.go:172] (0xc0009f28f0) Data frame received for 3\nI0701 12:39:48.415739    3338 log.go:172] (0xc0006bb720) (3) Data frame handling\nI0701 12:39:48.415894    3338 log.go:172] (0xc0009f28f0) Data frame received for 5\nI0701 12:39:48.415922    3338 log.go:172] (0xc00065d5e0) (5) Data frame handling\nI0701 12:39:48.415944    3338 log.go:172] (0xc00065d5e0) (5) Data frame sent\nI0701 12:39:48.415956    3338 log.go:172] (0xc0009f28f0) Data frame received for 5\nI0701 12:39:48.415976    3338 log.go:172] (0xc00065d5e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0701 12:39:48.418144    3338 log.go:172] (0xc0009f28f0) Data frame received for 1\nI0701 12:39:48.418181    3338 log.go:172] (0xc0006bb680) (1) Data frame handling\nI0701 12:39:48.418210    3338 
log.go:172] (0xc0006bb680) (1) Data frame sent\nI0701 12:39:48.418237    3338 log.go:172] (0xc0009f28f0) (0xc0006bb680) Stream removed, broadcasting: 1\nI0701 12:39:48.418262    3338 log.go:172] (0xc0009f28f0) Go away received\nI0701 12:39:48.418587    3338 log.go:172] (0xc0009f28f0) (0xc0006bb680) Stream removed, broadcasting: 1\nI0701 12:39:48.418602    3338 log.go:172] (0xc0009f28f0) (0xc0006bb720) Stream removed, broadcasting: 3\nI0701 12:39:48.418609    3338 log.go:172] (0xc0009f28f0) (0xc00065d5e0) Stream removed, broadcasting: 5\n"
Jul  1 12:39:48.424: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  1 12:39:48.424: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  1 12:39:48.428: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jul  1 12:39:58.439: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 12:39:58.439: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 12:39:58.439: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale down will not halt with an unhealthy stateful pod
Jul  1 12:39:58.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  1 12:39:58.648: INFO: stderr: "I0701 12:39:58.572286    3360 log.go:172] (0xc00044db80) (0xc0009ba320) Create stream\nI0701 12:39:58.572345    3360 log.go:172] (0xc00044db80) (0xc0009ba320) Stream added, broadcasting: 1\nI0701 12:39:58.575082    3360 log.go:172] (0xc00044db80) Reply frame received for 1\nI0701 12:39:58.575140    3360 log.go:172] (0xc00044db80) (0xc000509540) Create stream\nI0701 12:39:58.575167    3360 log.go:172] (0xc00044db80) (0xc000509540) Stream added, broadcasting: 3\nI0701 12:39:58.576019    3360 log.go:172] (0xc00044db80) Reply frame received for 3\nI0701 12:39:58.576043    3360 log.go:172] (0xc00044db80) (0xc0009ba3c0) Create stream\nI0701 12:39:58.576050    3360 log.go:172] (0xc00044db80) (0xc0009ba3c0) Stream added, broadcasting: 5\nI0701 12:39:58.576912    3360 log.go:172] (0xc00044db80) Reply frame received for 5\nI0701 12:39:58.639416    3360 log.go:172] (0xc00044db80) Data frame received for 5\nI0701 12:39:58.639462    3360 log.go:172] (0xc0009ba3c0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 12:39:58.639505    3360 log.go:172] (0xc00044db80) Data frame received for 3\nI0701 12:39:58.639543    3360 log.go:172] (0xc000509540) (3) Data frame handling\nI0701 12:39:58.639554    3360 log.go:172] (0xc000509540) (3) Data frame sent\nI0701 12:39:58.639566    3360 log.go:172] (0xc00044db80) Data frame received for 3\nI0701 12:39:58.639573    3360 log.go:172] (0xc000509540) (3) Data frame handling\nI0701 12:39:58.639599    3360 log.go:172] (0xc0009ba3c0) (5) Data frame sent\nI0701 12:39:58.639607    3360 log.go:172] (0xc00044db80) Data frame received for 5\nI0701 12:39:58.639612    3360 log.go:172] (0xc0009ba3c0) (5) Data frame handling\nI0701 12:39:58.641443    3360 log.go:172] (0xc00044db80) Data frame received for 1\nI0701 12:39:58.641458    3360 log.go:172] (0xc0009ba320) (1) Data frame handling\nI0701 12:39:58.641470    3360 log.go:172] (0xc0009ba320) (1) Data frame sent\nI0701 12:39:58.641480  
  3360 log.go:172] (0xc00044db80) (0xc0009ba320) Stream removed, broadcasting: 1\nI0701 12:39:58.641498    3360 log.go:172] (0xc00044db80) Go away received\nI0701 12:39:58.641930    3360 log.go:172] (0xc00044db80) (0xc0009ba320) Stream removed, broadcasting: 1\nI0701 12:39:58.641954    3360 log.go:172] (0xc00044db80) (0xc000509540) Stream removed, broadcasting: 3\nI0701 12:39:58.641972    3360 log.go:172] (0xc00044db80) (0xc0009ba3c0) Stream removed, broadcasting: 5\n"
Jul  1 12:39:58.648: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  1 12:39:58.648: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  1 12:39:58.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  1 12:39:58.942: INFO: stderr: "I0701 12:39:58.787303    3383 log.go:172] (0xc0009688f0) (0xc0005a54a0) Create stream\nI0701 12:39:58.787349    3383 log.go:172] (0xc0009688f0) (0xc0005a54a0) Stream added, broadcasting: 1\nI0701 12:39:58.794004    3383 log.go:172] (0xc0009688f0) Reply frame received for 1\nI0701 12:39:58.794073    3383 log.go:172] (0xc0009688f0) (0xc000ac6000) Create stream\nI0701 12:39:58.794092    3383 log.go:172] (0xc0009688f0) (0xc000ac6000) Stream added, broadcasting: 3\nI0701 12:39:58.799531    3383 log.go:172] (0xc0009688f0) Reply frame received for 3\nI0701 12:39:58.799568    3383 log.go:172] (0xc0009688f0) (0xc000ac60a0) Create stream\nI0701 12:39:58.799584    3383 log.go:172] (0xc0009688f0) (0xc000ac60a0) Stream added, broadcasting: 5\nI0701 12:39:58.802307    3383 log.go:172] (0xc0009688f0) Reply frame received for 5\nI0701 12:39:58.873578    3383 log.go:172] (0xc0009688f0) Data frame received for 5\nI0701 12:39:58.873621    3383 log.go:172] (0xc000ac60a0) (5) Data frame handling\nI0701 12:39:58.873651    3383 log.go:172] (0xc000ac60a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 12:39:58.933953    3383 log.go:172] (0xc0009688f0) Data frame received for 5\nI0701 12:39:58.933998    3383 log.go:172] (0xc0009688f0) Data frame received for 3\nI0701 12:39:58.934262    3383 log.go:172] (0xc000ac6000) (3) Data frame handling\nI0701 12:39:58.934294    3383 log.go:172] (0xc000ac6000) (3) Data frame sent\nI0701 12:39:58.934312    3383 log.go:172] (0xc0009688f0) Data frame received for 3\nI0701 12:39:58.934327    3383 log.go:172] (0xc000ac6000) (3) Data frame handling\nI0701 12:39:58.934386    3383 log.go:172] (0xc000ac60a0) (5) Data frame handling\nI0701 12:39:58.935867    3383 log.go:172] (0xc0009688f0) Data frame received for 1\nI0701 12:39:58.935888    3383 log.go:172] (0xc0005a54a0) (1) Data frame handling\nI0701 12:39:58.935900    3383 log.go:172] (0xc0005a54a0) (1) Data frame sent\nI0701 12:39:58.935910  
  3383 log.go:172] (0xc0009688f0) (0xc0005a54a0) Stream removed, broadcasting: 1\nI0701 12:39:58.935921    3383 log.go:172] (0xc0009688f0) Go away received\nI0701 12:39:58.936375    3383 log.go:172] (0xc0009688f0) (0xc0005a54a0) Stream removed, broadcasting: 1\nI0701 12:39:58.936394    3383 log.go:172] (0xc0009688f0) (0xc000ac6000) Stream removed, broadcasting: 3\nI0701 12:39:58.936404    3383 log.go:172] (0xc0009688f0) (0xc000ac60a0) Stream removed, broadcasting: 5\n"
Jul  1 12:39:58.942: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  1 12:39:58.942: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  1 12:39:58.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  1 12:39:59.213: INFO: stderr: "I0701 12:39:59.086474    3406 log.go:172] (0xc000aa2790) (0xc000ad6320) Create stream\nI0701 12:39:59.086527    3406 log.go:172] (0xc000aa2790) (0xc000ad6320) Stream added, broadcasting: 1\nI0701 12:39:59.089590    3406 log.go:172] (0xc000aa2790) Reply frame received for 1\nI0701 12:39:59.089639    3406 log.go:172] (0xc000aa2790) (0xc000621180) Create stream\nI0701 12:39:59.089654    3406 log.go:172] (0xc000aa2790) (0xc000621180) Stream added, broadcasting: 3\nI0701 12:39:59.090714    3406 log.go:172] (0xc000aa2790) Reply frame received for 3\nI0701 12:39:59.090737    3406 log.go:172] (0xc000aa2790) (0xc000ad63c0) Create stream\nI0701 12:39:59.090744    3406 log.go:172] (0xc000aa2790) (0xc000ad63c0) Stream added, broadcasting: 5\nI0701 12:39:59.091885    3406 log.go:172] (0xc000aa2790) Reply frame received for 5\nI0701 12:39:59.160232    3406 log.go:172] (0xc000aa2790) Data frame received for 5\nI0701 12:39:59.160259    3406 log.go:172] (0xc000ad63c0) (5) Data frame handling\nI0701 12:39:59.160280    3406 log.go:172] (0xc000ad63c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 12:39:59.203345    3406 log.go:172] (0xc000aa2790) Data frame received for 3\nI0701 12:39:59.203375    3406 log.go:172] (0xc000621180) (3) Data frame handling\nI0701 12:39:59.203395    3406 log.go:172] (0xc000621180) (3) Data frame sent\nI0701 12:39:59.203696    3406 log.go:172] (0xc000aa2790) Data frame received for 5\nI0701 12:39:59.203724    3406 log.go:172] (0xc000ad63c0) (5) Data frame handling\nI0701 12:39:59.203799    3406 log.go:172] (0xc000aa2790) Data frame received for 3\nI0701 12:39:59.203815    3406 log.go:172] (0xc000621180) (3) Data frame handling\nI0701 12:39:59.206150    3406 log.go:172] (0xc000aa2790) Data frame received for 1\nI0701 12:39:59.206176    3406 log.go:172] (0xc000ad6320) (1) Data frame handling\nI0701 12:39:59.206191    3406 log.go:172] (0xc000ad6320) (1) Data frame sent\nI0701 12:39:59.206214  
  3406 log.go:172] (0xc000aa2790) (0xc000ad6320) Stream removed, broadcasting: 1\nI0701 12:39:59.206243    3406 log.go:172] (0xc000aa2790) Go away received\nI0701 12:39:59.206702    3406 log.go:172] (0xc000aa2790) (0xc000ad6320) Stream removed, broadcasting: 1\nI0701 12:39:59.206721    3406 log.go:172] (0xc000aa2790) (0xc000621180) Stream removed, broadcasting: 3\nI0701 12:39:59.206735    3406 log.go:172] (0xc000aa2790) (0xc000ad63c0) Stream removed, broadcasting: 5\n"
Jul  1 12:39:59.213: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  1 12:39:59.213: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
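Note on the `|| true` in the commands above: the test deliberately breaks each pod's readiness probe by moving index.html away, and the guard masks `mv`'s own exit code so that a nonzero rc from `kubectl exec` can only mean a transport or pod problem. A minimal local sketch of that masking behavior (the file path here is illustrative, not from the test):

```shell
#!/bin/sh
# Without "|| true": mv of a missing file fails, so the inner shell exits nonzero.
sh -c 'mv -v /no/such/file /tmp/ 2>/dev/null'
rc_unguarded=$?
echo "without guard: rc=$rc_unguarded"

# With "|| true": the same failure is masked and the inner shell exits 0 --
# mirroring how the e2e command ignores mv's exit code, leaving kubectl's rc
# to reflect only problems like the container or pod being gone.
sh -c 'mv -v /no/such/file /tmp/ 2>/dev/null || true'
rc_guarded=$?
echo "with guard: rc=$rc_guarded"
```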

Jul  1 12:39:59.213: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 12:39:59.216: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul  1 12:40:09.222: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 12:40:09.222: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 12:40:09.222: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 12:40:09.232: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  1 12:40:09.232: INFO: ss-0  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  }]
Jul  1 12:40:09.232: INFO: ss-1  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:09.232: INFO: ss-2  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:09.232: INFO: 
Jul  1 12:40:09.232: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  1 12:40:10.375: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  1 12:40:10.375: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  }]
Jul  1 12:40:10.375: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:10.375: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:10.375: INFO: 
Jul  1 12:40:10.375: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  1 12:40:11.380: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  1 12:40:11.380: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  }]
Jul  1 12:40:11.380: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:11.381: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:11.381: INFO: 
Jul  1 12:40:11.381: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  1 12:40:12.422: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  1 12:40:12.422: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  }]
Jul  1 12:40:12.422: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:12.422: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:12.422: INFO: 
Jul  1 12:40:12.422: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  1 12:40:13.427: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  1 12:40:13.427: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  }]
Jul  1 12:40:13.427: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:13.427: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:13.427: INFO: 
Jul  1 12:40:13.427: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  1 12:40:14.461: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  1 12:40:14.461: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  }]
Jul  1 12:40:14.461: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:14.461: INFO: 
Jul  1 12:40:14.461: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  1 12:40:15.466: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  1 12:40:15.466: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  }]
Jul  1 12:40:15.466: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:15.466: INFO: 
Jul  1 12:40:15.466: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  1 12:40:16.471: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  1 12:40:16.471: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  }]
Jul  1 12:40:16.471: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:16.471: INFO: 
Jul  1 12:40:16.471: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  1 12:40:17.475: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  1 12:40:17.475: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  }]
Jul  1 12:40:17.475: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:17.475: INFO: 
Jul  1 12:40:17.475: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  1 12:40:18.480: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  1 12:40:18.480: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:07 +0000 UTC  }]
Jul  1 12:40:18.480: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 12:39:35 +0000 UTC  }]
Jul  1 12:40:18.480: INFO: 
Jul  1 12:40:18.480: INFO: StatefulSet ss has not reached scale 0, at 2
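The repeated "has not reached scale 0" lines above come from a poll loop that re-reads the StatefulSet status roughly once per second until the replica count hits 0 or a timeout expires. A hedged sketch of that poll-until-zero loop, with `get_replicas` stubbed locally (counting 3 down to 0) in place of a real `kubectl get statefulset ss -n statefulset-9915 -o jsonpath='{.status.replicas}'` call:

```shell
#!/bin/sh
# Stub standing in for a kubectl jsonpath query of status.replicas.
# It returns 3, 2, 1, 0 across successive calls via a state file.
state=$(mktemp)
echo 3 > "$state"
get_replicas() {
  n=$(cat "$state")
  [ "$n" -gt 0 ] && echo $((n - 1)) > "$state"
  echo "$n"
}

# Poll until the replica count reaches 0 or we give up.
attempts=0
while [ "$attempts" -lt 10 ]; do
  n=$(get_replicas)
  if [ "$n" -eq 0 ]; then
    echo "StatefulSet ss reached scale 0"
    break
  fi
  echo "StatefulSet ss has not reached scale 0, at $n"
  attempts=$((attempts + 1))
  # sleep 1   # the real loop waits between polls
done
rm -f "$state"
```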
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-9915
Jul  1 12:40:19.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:40:19.620: INFO: rc: 1
Jul  1 12:40:19.620: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
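From here the restore command is retried every 10s: the first attempt fails because the container is already stopped ("container not found"), and later attempts fail because the pod object itself has been deleted (pods "ss-0" not found), which is the expected end state at scale 0. A sketch of that retry-on-nonzero-rc pattern, with the kubectl call replaced by a local stub that fails twice and then succeeds (attempt count and stub are illustrative):

```shell
#!/bin/sh
# Retry a command until it succeeds or attempts run out, mirroring the
# "Waiting 10s to retry failed RunHostCmd" loop in the log above.
state=$(mktemp); echo 0 > "$state"
flaky_cmd() {
  n=$(cat "$state"); echo $((n + 1)) > "$state"
  [ "$n" -ge 2 ]   # rc=1 for the first two calls, rc=0 afterwards
}

attempt=1
max=5
until flaky_cmd; do
  echo "rc: 1"
  [ "$attempt" -ge "$max" ] && { echo "giving up"; exit 1; }
  echo "Waiting to retry failed command (attempt $attempt/$max)"
  attempt=$((attempt + 1))
  # sleep 10   # the real loop waits 10s between retries
done
echo "command succeeded on attempt $attempt"
rm -f "$state"
```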
Jul  1 12:40:29.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:40:29.719: INFO: rc: 1
Jul  1 12:40:29.719: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:40:39.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:40:39.817: INFO: rc: 1
Jul  1 12:40:39.817: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:40:49.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:40:49.926: INFO: rc: 1
Jul  1 12:40:49.926: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:40:59.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:41:00.034: INFO: rc: 1
Jul  1 12:41:00.034: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:41:10.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:41:10.477: INFO: rc: 1
Jul  1 12:41:10.477: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:41:20.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:41:20.571: INFO: rc: 1
Jul  1 12:41:20.571: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:41:30.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:41:30.679: INFO: rc: 1
Jul  1 12:41:30.679: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:41:40.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:41:40.794: INFO: rc: 1
Jul  1 12:41:40.794: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:41:50.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:41:50.895: INFO: rc: 1
Jul  1 12:41:50.895: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:42:00.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:42:00.997: INFO: rc: 1
Jul  1 12:42:00.997: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:42:10.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:42:11.114: INFO: rc: 1
Jul  1 12:42:11.114: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:42:21.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:42:21.234: INFO: rc: 1
Jul  1 12:42:21.234: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:42:31.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:42:31.331: INFO: rc: 1
Jul  1 12:42:31.331: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:42:41.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:42:41.430: INFO: rc: 1
Jul  1 12:42:41.430: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:42:51.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:42:51.520: INFO: rc: 1
Jul  1 12:42:51.521: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  1 12:43:01.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:43:01.625: INFO: rc: 1
Jul  1 12:43:01.625: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
[... the same RunHostCmd attempt (kubectl exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true) was retried every 10s through 12:45:13, 13 more identical failures, each rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found ...]
Jul  1 12:45:23.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9915 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  1 12:45:23.157: INFO: rc: 1
Jul  1 12:45:23.157: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Jul  1 12:45:23.157: INFO: Scaling statefulset ss to 0
Jul  1 12:45:23.175: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul  1 12:45:23.178: INFO: Deleting all statefulset in ns statefulset-9915
Jul  1 12:45:23.180: INFO: Scaling statefulset ss to 0
Jul  1 12:45:23.189: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 12:45:23.191: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:45:23.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9915" for this suite.

• [SLOW TEST:376.392 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":265,"skipped":4544,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
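The long run of failed `RunHostCmd` attempts above follows a simple pattern: run the command, check the return code, wait 10s, try again until an attempt budget is exhausted. A generic shell sketch of that pattern is below; `retry_cmd`, its argument order, and the fixed interval are illustrative assumptions, not the e2e framework's actual implementation:

```shell
#!/bin/sh
# Hypothetical sketch of the retry loop seen in the log above:
# run a command, and on failure wait a fixed interval before the
# next attempt, up to a maximum number of attempts.
retry_cmd() {
  attempts=$1
  interval=$2
  shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0                                  # command succeeded
    fi
    echo "attempt $i/$attempts failed (rc=$?); retrying in ${interval}s" >&2
    sleep "$interval"
    i=$((i + 1))
  done
  return 1                                      # all attempts exhausted
}
```

In the log, the retried command keeps failing with rc 1 because the pod `ss-0` has already been deleted, so every attempt ends with "pods \"ss-0\" not found" until the loop gives up and the test moves on to scaling the StatefulSet to 0.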
SSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:45:23.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:45:23.308: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e80b9100-fb55-48c7-bdd6-b111a1db0af0" in namespace "security-context-test-4750" to be "Succeeded or Failed"
Jul  1 12:45:23.334: INFO: Pod "busybox-privileged-false-e80b9100-fb55-48c7-bdd6-b111a1db0af0": Phase="Pending", Reason="", readiness=false. Elapsed: 25.656861ms
Jul  1 12:45:25.344: INFO: Pod "busybox-privileged-false-e80b9100-fb55-48c7-bdd6-b111a1db0af0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035758111s
Jul  1 12:45:27.347: INFO: Pod "busybox-privileged-false-e80b9100-fb55-48c7-bdd6-b111a1db0af0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039254046s
Jul  1 12:45:27.347: INFO: Pod "busybox-privileged-false-e80b9100-fb55-48c7-bdd6-b111a1db0af0" satisfied condition "Succeeded or Failed"
Jul  1 12:45:27.365: INFO: Got logs for pod "busybox-privileged-false-e80b9100-fb55-48c7-bdd6-b111a1db0af0": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:45:27.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4750" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4548,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
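The Security Context test above runs a busybox pod with `privileged: false`, and the captured log line ("ip: RTNETLINK answers: Operation not permitted") shows the kernel refusing a privileged network operation. A minimal stand-alone manifest reproducing that setup might look like the following sketch; the pod name and the probe command are illustrative assumptions, not the framework's exact spec:

```shell
# Emit a pod manifest roughly equivalent to the test's
# busybox-privileged-false pod. The container attempts a privileged
# network operation, which an unprivileged container should be denied.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-privileged-false
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ip link add dummy0 type dummy"]  # illustrative probe
    securityContext:
      privileged: false             # the operation above should be refused
EOF
)
printf '%s\n' "$manifest"
```

Applied with `kubectl apply -f -`, such a pod would be expected to log the same "Operation not permitted" error before exiting.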
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:45:27.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-3516c91f-fbdf-4a50-9437-e9d7b76516bc
STEP: Creating a pod to test consume secrets
Jul  1 12:45:27.512: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-17cc97a4-a0ae-45b9-8926-c2dc9889c2d3" in namespace "projected-1919" to be "Succeeded or Failed"
Jul  1 12:45:27.535: INFO: Pod "pod-projected-secrets-17cc97a4-a0ae-45b9-8926-c2dc9889c2d3": Phase="Pending", Reason="", readiness=false. Elapsed: 23.157117ms
Jul  1 12:45:29.630: INFO: Pod "pod-projected-secrets-17cc97a4-a0ae-45b9-8926-c2dc9889c2d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118837661s
Jul  1 12:45:31.695: INFO: Pod "pod-projected-secrets-17cc97a4-a0ae-45b9-8926-c2dc9889c2d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183005924s
Jul  1 12:45:33.803: INFO: Pod "pod-projected-secrets-17cc97a4-a0ae-45b9-8926-c2dc9889c2d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.291000739s
Jul  1 12:45:35.807: INFO: Pod "pod-projected-secrets-17cc97a4-a0ae-45b9-8926-c2dc9889c2d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.2953735s
STEP: Saw pod success
Jul  1 12:45:35.807: INFO: Pod "pod-projected-secrets-17cc97a4-a0ae-45b9-8926-c2dc9889c2d3" satisfied condition "Succeeded or Failed"
Jul  1 12:45:35.810: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-17cc97a4-a0ae-45b9-8926-c2dc9889c2d3 container projected-secret-volume-test: 
STEP: delete the pod
Jul  1 12:45:35.834: INFO: Waiting for pod pod-projected-secrets-17cc97a4-a0ae-45b9-8926-c2dc9889c2d3 to disappear
Jul  1 12:45:35.839: INFO: Pod pod-projected-secrets-17cc97a4-a0ae-45b9-8926-c2dc9889c2d3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:45:35.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1919" for this suite.

• [SLOW TEST:8.476 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4561,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
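The projected-secret test above mounts a secret through a projected volume as a non-root user, with an explicit `defaultMode` and a pod-level `fsGroup`. A sketch of such a pod is below; the names, user/group IDs, and mode value are illustrative assumptions rather than the values the framework actually used:

```shell
# Sketch of a pod consuming a projected secret volume as non-root,
# with an explicit file mode and fsGroup. All names and IDs here are
# illustrative.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  securityContext:
    runAsUser: 1000                     # non-root
    fsGroup: 1001
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440                 # octal in YAML; JSON needs decimal 288
      sources:
      - secret:
          name: projected-secret-test   # illustrative secret name
EOF
)
printf '%s\n' "$manifest"
```

With this layout the mounted files end up owned by the `fsGroup` and carry the requested mode, which is what the test's `ls -l`-style check verifies.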
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:45:35.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jul  1 12:45:35.939: INFO: Created pod &Pod{ObjectMeta:{dns-2057  dns-2057 /api/v1/namespaces/dns-2057/pods/dns-2057 00ae305f-688b-48a7-9826-fecef74a978c 16811327 0 2020-07-01 12:45:35 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-07-01 12:45:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f2flf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f2flf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f2flf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kuber
netes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  1 12:45:35.946: INFO: The status of Pod dns-2057 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 12:45:37.950: INFO: The status of Pod dns-2057 is Pending, waiting for it to be Running (with Ready = true)
Jul  1 12:45:39.951: INFO: The status of Pod dns-2057 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Jul  1 12:45:39.951: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2057 PodName:dns-2057 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 12:45:39.951: INFO: >>> kubeConfig: /root/.kube/config
I0701 12:45:39.990816       7 log.go:172] (0xc002eca8f0) (0xc0019f2640) Create stream
I0701 12:45:39.990865       7 log.go:172] (0xc002eca8f0) (0xc0019f2640) Stream added, broadcasting: 1
I0701 12:45:39.994303       7 log.go:172] (0xc002eca8f0) Reply frame received for 1
I0701 12:45:39.994389       7 log.go:172] (0xc002eca8f0) (0xc0016c1a40) Create stream
I0701 12:45:39.994422       7 log.go:172] (0xc002eca8f0) (0xc0016c1a40) Stream added, broadcasting: 3
I0701 12:45:39.995551       7 log.go:172] (0xc002eca8f0) Reply frame received for 3
I0701 12:45:39.995587       7 log.go:172] (0xc002eca8f0) (0xc0016c1e00) Create stream
I0701 12:45:39.995601       7 log.go:172] (0xc002eca8f0) (0xc0016c1e00) Stream added, broadcasting: 5
I0701 12:45:39.996993       7 log.go:172] (0xc002eca8f0) Reply frame received for 5
I0701 12:45:40.111678       7 log.go:172] (0xc002eca8f0) Data frame received for 3
I0701 12:45:40.111706       7 log.go:172] (0xc0016c1a40) (3) Data frame handling
I0701 12:45:40.111717       7 log.go:172] (0xc0016c1a40) (3) Data frame sent
I0701 12:45:40.113751       7 log.go:172] (0xc002eca8f0) Data frame received for 3
I0701 12:45:40.113763       7 log.go:172] (0xc0016c1a40) (3) Data frame handling
I0701 12:45:40.114074       7 log.go:172] (0xc002eca8f0) Data frame received for 5
I0701 12:45:40.114099       7 log.go:172] (0xc0016c1e00) (5) Data frame handling
I0701 12:45:40.115593       7 log.go:172] (0xc002eca8f0) Data frame received for 1
I0701 12:45:40.115607       7 log.go:172] (0xc0019f2640) (1) Data frame handling
I0701 12:45:40.115614       7 log.go:172] (0xc0019f2640) (1) Data frame sent
I0701 12:45:40.115625       7 log.go:172] (0xc002eca8f0) (0xc0019f2640) Stream removed, broadcasting: 1
I0701 12:45:40.115717       7 log.go:172] (0xc002eca8f0) Go away received
I0701 12:45:40.115749       7 log.go:172] (0xc002eca8f0) (0xc0019f2640) Stream removed, broadcasting: 1
I0701 12:45:40.115774       7 log.go:172] (0xc002eca8f0) (0xc0016c1a40) Stream removed, broadcasting: 3
I0701 12:45:40.115783       7 log.go:172] (0xc002eca8f0) (0xc0016c1e00) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jul  1 12:45:40.115: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2057 PodName:dns-2057 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 12:45:40.115: INFO: >>> kubeConfig: /root/.kube/config
I0701 12:45:40.146395       7 log.go:172] (0xc002caab00) (0xc0010e6280) Create stream
I0701 12:45:40.146447       7 log.go:172] (0xc002caab00) (0xc0010e6280) Stream added, broadcasting: 1
I0701 12:45:40.149017       7 log.go:172] (0xc002caab00) Reply frame received for 1
I0701 12:45:40.149046       7 log.go:172] (0xc002caab00) (0xc0010e63c0) Create stream
I0701 12:45:40.149056       7 log.go:172] (0xc002caab00) (0xc0010e63c0) Stream added, broadcasting: 3
I0701 12:45:40.150428       7 log.go:172] (0xc002caab00) Reply frame received for 3
I0701 12:45:40.150470       7 log.go:172] (0xc002caab00) (0xc002120000) Create stream
I0701 12:45:40.150487       7 log.go:172] (0xc002caab00) (0xc002120000) Stream added, broadcasting: 5
I0701 12:45:40.151412       7 log.go:172] (0xc002caab00) Reply frame received for 5
I0701 12:45:40.215875       7 log.go:172] (0xc002caab00) Data frame received for 3
I0701 12:45:40.215919       7 log.go:172] (0xc0010e63c0) (3) Data frame handling
I0701 12:45:40.215946       7 log.go:172] (0xc0010e63c0) (3) Data frame sent
I0701 12:45:40.217785       7 log.go:172] (0xc002caab00) Data frame received for 5
I0701 12:45:40.217808       7 log.go:172] (0xc002120000) (5) Data frame handling
I0701 12:45:40.217837       7 log.go:172] (0xc002caab00) Data frame received for 3
I0701 12:45:40.217873       7 log.go:172] (0xc0010e63c0) (3) Data frame handling
I0701 12:45:40.219337       7 log.go:172] (0xc002caab00) Data frame received for 1
I0701 12:45:40.219367       7 log.go:172] (0xc0010e6280) (1) Data frame handling
I0701 12:45:40.219382       7 log.go:172] (0xc0010e6280) (1) Data frame sent
I0701 12:45:40.219399       7 log.go:172] (0xc002caab00) (0xc0010e6280) Stream removed, broadcasting: 1
I0701 12:45:40.219416       7 log.go:172] (0xc002caab00) Go away received
I0701 12:45:40.219639       7 log.go:172] (0xc002caab00) (0xc0010e6280) Stream removed, broadcasting: 1
I0701 12:45:40.219665       7 log.go:172] (0xc002caab00) (0xc0010e63c0) Stream removed, broadcasting: 3
I0701 12:45:40.219692       7 log.go:172] (0xc002caab00) (0xc002120000) Stream removed, broadcasting: 5
Jul  1 12:45:40.219: INFO: Deleting pod dns-2057...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:45:40.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2057" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":268,"skipped":4577,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
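The DNS test above creates a pod with `dnsPolicy: None` and a custom `dnsConfig`; the nameserver (1.1.1.1), search suffix (resolv.conf.local), and agnhost image can all be read out of the pod dump in the log. A stand-alone sketch of that pod follows; the pod name is illustrative:

```shell
# Sketch of the test's dnsPolicy=None pod: cluster DNS is ignored
# entirely and the pod's resolv.conf is built from dnsConfig alone.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-example            # illustrative name
spec:
  dnsPolicy: "None"            # do not inherit cluster or node DNS
  dnsConfig:
    nameservers:
    - 1.1.1.1                  # from the log's PodDNSConfig
    searches:
    - resolv.conf.local
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["pause"]
EOF
)
printf '%s\n' "$manifest"
```

The test then execs `agnhost dns-suffix` and `agnhost dns-server-list` inside the container (the ExecWithOptions stream chatter above) to confirm the pod's resolv.conf contains exactly this nameserver and search list.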
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:45:40.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jul  1 12:45:40.927: INFO: PodSpec: initContainers in spec.initContainers
Jul  1 12:46:31.481: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f099c1be-097c-4100-9786-9f5570b3b331", GenerateName:"", Namespace:"init-container-2125", SelfLink:"/api/v1/namespaces/init-container-2125/pods/pod-init-f099c1be-097c-4100-9786-9f5570b3b331", UID:"439c3494-0b3b-46dd-8e9c-e67312f9fcb8", ResourceVersion:"16811544", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729204340, loc:(*time.Location)(0x7b200c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"926990262"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0029ca340), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0029ca360)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0029ca380), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0029ca3a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-t755f", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006467340), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t755f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t755f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t755f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0054e0348), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001709e30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0054e03d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0054e03f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0054e03f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0054e03fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204341, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204341, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204341, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204340, loc:(*time.Location)(0x7b200c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.15", PodIP:"10.244.2.13", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.13"}}, StartTime:(*v1.Time)(0xc0029ca3c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001709f80)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00112c000)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://356fcae327dc586e8d667797d344b7ce8340346bf83602fccc394a874330a1ab", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0029ca420), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0029ca400), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0054e047f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:46:31.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2125" for this suite.

• [SLOW TEST:51.209 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":269,"skipped":4589,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
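The spec above exercises init-container gating: `init1` runs `/bin/false`, so under `RestartPolicy:"Always"` the kubelet keeps retrying it (note `RestartCount:3` in the dump) and the `run1` app container never leaves `Waiting`. A minimal, illustrative model of that gating — not kubelet source, and the attempt lists are hypothetical:

```python
def run_pod(init_results, restart_policy="Always"):
    """init_results maps init container name -> exit codes observed so far.

    Init containers run strictly in order; an init container that never
    exits 0 blocks everything after it. With restartPolicy Never the pod
    is marked Failed; with Always it stays Pending while retries continue.
    """
    for name, exit_codes in init_results.items():
        succeeded = any(code == 0 for code in exit_codes)
        if not succeeded:
            return "Failed" if restart_policy == "Never" else "Pending"
    return "Running"  # all init containers succeeded; app containers may start

# Mirrors the log above: init1 (/bin/false) keeps failing, init2 never runs,
# so the pod stays Pending and run1 never starts.
assert run_pod({"init1": [1, 1, 1], "init2": []}) == "Pending"
assert run_pod({"init1": [1, 0], "init2": [0]}) == "Running"
assert run_pod({"init1": [1]}, restart_policy="Never") == "Failed"
```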
S
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:46:31.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jul  1 12:46:31.833: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:46:53.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2771" for this suite.

• [SLOW TEST:22.331 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4590,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
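Each completed spec emits one JSON progress line like the `{"msg": ...}` line above, carrying running totals. A small parser for tallying results from a saved log; the field names are taken directly from the lines in this run:

```python
import json

def parse_progress(log_text):
    """Extract the per-spec JSON progress records from an e2e log."""
    records = []
    for line in log_text.splitlines():
        line = line.strip()
        if line.startswith('{"msg"'):
            records.append(json.loads(line))
    return records

log = '''
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4590,"failed":1,"failures":["[sig-apps] StatefulSet ..."]}
'''
recs = parse_progress(log)
assert recs[0]["completed"] == 270
assert recs[0]["failed"] == 1 and len(recs[0]["failures"]) == 1
```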
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:46:53.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-cf9b60f2-45e0-46d6-8e77-f7083ad1583b
STEP: Creating a pod to test consume configMaps
Jul  1 12:46:54.140: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0caabaeb-6a0b-45a3-a6e1-9c4b56e0dfb3" in namespace "projected-9603" to be "Succeeded or Failed"
Jul  1 12:46:54.153: INFO: Pod "pod-projected-configmaps-0caabaeb-6a0b-45a3-a6e1-9c4b56e0dfb3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.977452ms
Jul  1 12:46:56.159: INFO: Pod "pod-projected-configmaps-0caabaeb-6a0b-45a3-a6e1-9c4b56e0dfb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019158587s
Jul  1 12:46:58.163: INFO: Pod "pod-projected-configmaps-0caabaeb-6a0b-45a3-a6e1-9c4b56e0dfb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023424372s
Jul  1 12:47:00.171: INFO: Pod "pod-projected-configmaps-0caabaeb-6a0b-45a3-a6e1-9c4b56e0dfb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030661998s
STEP: Saw pod success
Jul  1 12:47:00.171: INFO: Pod "pod-projected-configmaps-0caabaeb-6a0b-45a3-a6e1-9c4b56e0dfb3" satisfied condition "Succeeded or Failed"
Jul  1 12:47:00.173: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-0caabaeb-6a0b-45a3-a6e1-9c4b56e0dfb3 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  1 12:47:00.236: INFO: Waiting for pod pod-projected-configmaps-0caabaeb-6a0b-45a3-a6e1-9c4b56e0dfb3 to disappear
Jul  1 12:47:00.242: INFO: Pod pod-projected-configmaps-0caabaeb-6a0b-45a3-a6e1-9c4b56e0dfb3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:47:00.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9603" for this suite.

• [SLOW TEST:6.338 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4631,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
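The projected configMap spec above verifies key-to-path mappings: the volume's `items` list maps each configMap data key to a relative file path under the mount. A sketch of that layout rule, assuming an illustrative mount path and key names (not taken from the test):

```python
def project_configmap(data, items, mount_path="/etc/config"):
    """Return {absolute_file_path: contents} for a mapped configMap volume.

    Only keys listed in `items` are materialized, each at its mapped
    relative path under the mount point.
    """
    layout = {}
    for item in items:
        layout[f'{mount_path}/{item["path"]}'] = data[item["key"]]
    return layout

files = project_configmap(
    {"data-2": "value-2", "data-3": "value-3"},
    [{"key": "data-2", "path": "path/to/data-2"}],
)
assert files == {"/etc/config/path/to/data-2": "value-2"}  # data-3 not mapped
```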
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:47:00.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jul  1 12:47:00.348: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:47:09.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9974" for this suite.

• [SLOW TEST:9.409 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":272,"skipped":4645,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
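The `PodStatus` dumped earlier in this log shows how the pod-level conditions follow from container readiness: `Initialized` requires every init container to have succeeded, and `ContainersReady`/`Ready` additionally require every app container to be ready. A simplified derivation (illustrative only; the real kubelet also honors readiness gates):

```python
def pod_conditions(init_done, app_ready):
    """Derive pod conditions from per-container state (simplified model)."""
    initialized = all(init_done.values())
    containers_ready = initialized and all(app_ready.values())
    return {
        "Initialized": initialized,
        "ContainersReady": containers_ready,
        "Ready": containers_ready,  # readiness gates ignored in this sketch
    }

# Matches the earlier dump: init1 failing leaves Initialized, ContainersReady,
# and Ready all False while only PodScheduled is True.
conds = pod_conditions({"init1": False, "init2": False}, {"run1": False})
assert conds == {"Initialized": False, "ContainersReady": False, "Ready": False}
```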
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:47:09.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-3a352727-60ea-4af7-a509-b44216cd4e5f
STEP: Creating a pod to test consume secrets
Jul  1 12:47:10.342: INFO: Waiting up to 5m0s for pod "pod-secrets-bc67647a-9f75-4c75-80ea-81518a7fef72" in namespace "secrets-1509" to be "Succeeded or Failed"
Jul  1 12:47:10.375: INFO: Pod "pod-secrets-bc67647a-9f75-4c75-80ea-81518a7fef72": Phase="Pending", Reason="", readiness=false. Elapsed: 32.924741ms
Jul  1 12:47:12.415: INFO: Pod "pod-secrets-bc67647a-9f75-4c75-80ea-81518a7fef72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073064194s
Jul  1 12:47:14.422: INFO: Pod "pod-secrets-bc67647a-9f75-4c75-80ea-81518a7fef72": Phase="Running", Reason="", readiness=true. Elapsed: 4.0804618s
Jul  1 12:47:16.426: INFO: Pod "pod-secrets-bc67647a-9f75-4c75-80ea-81518a7fef72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084400222s
STEP: Saw pod success
Jul  1 12:47:16.426: INFO: Pod "pod-secrets-bc67647a-9f75-4c75-80ea-81518a7fef72" satisfied condition "Succeeded or Failed"
Jul  1 12:47:16.428: INFO: Trying to get logs from node kali-worker pod pod-secrets-bc67647a-9f75-4c75-80ea-81518a7fef72 container secret-volume-test: 
STEP: delete the pod
Jul  1 12:47:16.506: INFO: Waiting for pod pod-secrets-bc67647a-9f75-4c75-80ea-81518a7fef72 to disappear
Jul  1 12:47:16.543: INFO: Pod pod-secrets-bc67647a-9f75-4c75-80ea-81518a7fef72 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:47:16.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1509" for this suite.

• [SLOW TEST:6.893 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4705,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
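The Secret-volume spec works like the configMap one, with one difference worth noting: Secret `data` values are base64-encoded in the API object, and they are decoded before being written as files into the volume. A sketch of that step, with illustrative key and mount names:

```python
import base64

def materialize_secret(data_b64, items, mount_path="/etc/secret-volume"):
    """Return {file_path: raw_bytes} for a mapped Secret volume (sketch)."""
    files = {}
    for item in items:
        raw = base64.b64decode(data_b64[item["key"]])  # API stores base64
        files[f'{mount_path}/{item["path"]}'] = raw
    return files

files = materialize_secret(
    {"data-1": base64.b64encode(b"value-1").decode()},
    [{"key": "data-1", "path": "new-path-data-1"}],
)
assert files["/etc/secret-volume/new-path-data-1"] == b"value-1"
```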
SSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul  1 12:47:16.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul  1 12:47:16.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7217
I0701 12:47:16.670784       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7217, replica count: 1
I0701 12:47:17.721495       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0701 12:47:18.721763       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0701 12:47:19.722002       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0701 12:47:20.722218       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0701 12:47:21.722439       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0701 12:47:22.722664       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  1 12:47:23.008: INFO: Created: latency-svc-h57gv
Jul  1 12:47:23.047: INFO: Got endpoints: latency-svc-h57gv [225.092496ms]
Jul  1 12:47:24.121: INFO: Created: latency-svc-ccz44
Jul  1 12:47:24.154: INFO: Got endpoints: latency-svc-ccz44 [1.106344969s]
Jul  1 12:47:24.355: INFO: Created: latency-svc-fqflt
Jul  1 12:47:24.399: INFO: Got endpoints: latency-svc-fqflt [1.351175525s]
Jul  1 12:47:24.517: INFO: Created: latency-svc-rwq87
Jul  1 12:47:24.563: INFO: Got endpoints: latency-svc-rwq87 [1.515689894s]
Jul  1 12:47:24.703: INFO: Created: latency-svc-x7z2k
Jul  1 12:47:24.707: INFO: Got endpoints: latency-svc-x7z2k [1.659122084s]
Jul  1 12:47:24.918: INFO: Created: latency-svc-54z2z
Jul  1 12:47:24.947: INFO: Got endpoints: latency-svc-54z2z [1.899945632s]
Jul  1 12:47:24.977: INFO: Created: latency-svc-bjgr2
Jul  1 12:47:25.068: INFO: Got endpoints: latency-svc-bjgr2 [2.020135974s]
Jul  1 12:47:25.101: INFO: Created: latency-svc-rkbth
Jul  1 12:47:25.154: INFO: Got endpoints: latency-svc-rkbth [2.106401138s]
Jul  1 12:47:25.244: INFO: Created: latency-svc-c4khd
Jul  1 12:47:25.276: INFO: Got endpoints: latency-svc-c4khd [2.227996291s]
Jul  1 12:47:25.318: INFO: Created: latency-svc-s8gvq
Jul  1 12:47:25.385: INFO: Got endpoints: latency-svc-s8gvq [2.337480725s]
Jul  1 12:47:25.439: INFO: Created: latency-svc-qn4q2
Jul  1 12:47:25.459: INFO: Got endpoints: latency-svc-qn4q2 [2.410999904s]
Jul  1 12:47:25.583: INFO: Created: latency-svc-2df5m
Jul  1 12:47:25.633: INFO: Got endpoints: latency-svc-2df5m [2.584970025s]
Jul  1 12:47:25.746: INFO: Created: latency-svc-swmw4
Jul  1 12:47:25.791: INFO: Got endpoints: latency-svc-swmw4 [2.743337788s]
Jul  1 12:47:25.792: INFO: Created: latency-svc-h7r5t
Jul  1 12:47:25.838: INFO: Got endpoints: latency-svc-h7r5t [2.79062377s]
Jul  1 12:47:25.948: INFO: Created: latency-svc-8wdln
Jul  1 12:47:25.969: INFO: Got endpoints: latency-svc-8wdln [2.92115262s]
Jul  1 12:47:26.010: INFO: Created: latency-svc-6sklq
Jul  1 12:47:26.023: INFO: Got endpoints: latency-svc-6sklq [2.975073128s]
Jul  1 12:47:26.147: INFO: Created: latency-svc-sngnh
Jul  1 12:47:26.149: INFO: Got endpoints: latency-svc-sngnh [1.994446324s]
Jul  1 12:47:26.339: INFO: Created: latency-svc-b9cvm
Jul  1 12:47:26.399: INFO: Got endpoints: latency-svc-b9cvm [1.999780874s]
Jul  1 12:47:26.591: INFO: Created: latency-svc-v5jhz
Jul  1 12:47:26.611: INFO: Got endpoints: latency-svc-v5jhz [2.047935685s]
Jul  1 12:47:26.733: INFO: Created: latency-svc-p889l
Jul  1 12:47:26.780: INFO: Got endpoints: latency-svc-p889l [2.073158199s]
Jul  1 12:47:26.900: INFO: Created: latency-svc-4jwfb
Jul  1 12:47:26.929: INFO: Got endpoints: latency-svc-4jwfb [1.98176839s]
Jul  1 12:47:26.970: INFO: Created: latency-svc-wqf9f
Jul  1 12:47:27.044: INFO: Got endpoints: latency-svc-wqf9f [1.976416952s]
Jul  1 12:47:27.084: INFO: Created: latency-svc-mfg49
Jul  1 12:47:27.099: INFO: Got endpoints: latency-svc-mfg49 [1.944397558s]
Jul  1 12:47:27.672: INFO: Created: latency-svc-vmk9n
Jul  1 12:47:29.379: INFO: Got endpoints: latency-svc-vmk9n [4.103168309s]
Jul  1 12:47:29.443: INFO: Created: latency-svc-qlzvj
Jul  1 12:47:29.980: INFO: Got endpoints: latency-svc-qlzvj [4.594666131s]
Jul  1 12:47:30.252: INFO: Created: latency-svc-9l4wh
Jul  1 12:47:30.287: INFO: Got endpoints: latency-svc-9l4wh [4.828260375s]
Jul  1 12:47:30.632: INFO: Created: latency-svc-6vhmm
Jul  1 12:47:30.638: INFO: Got endpoints: latency-svc-6vhmm [5.005587246s]
Jul  1 12:47:31.284: INFO: Created: latency-svc-jjjx6
Jul  1 12:47:31.288: INFO: Got endpoints: latency-svc-jjjx6 [5.496577997s]
Jul  1 12:47:31.558: INFO: Created: latency-svc-bbs8p
Jul  1 12:47:31.840: INFO: Got endpoints: latency-svc-bbs8p [6.001865442s]
Jul  1 12:47:31.886: INFO: Created: latency-svc-x4lrq
Jul  1 12:47:32.403: INFO: Got endpoints: latency-svc-x4lrq [6.434303163s]
Jul  1 12:47:32.493: INFO: Created: latency-svc-tntlp
Jul  1 12:47:32.543: INFO: Got endpoints: latency-svc-tntlp [6.520517497s]
Jul  1 12:47:32.683: INFO: Created: latency-svc-pgznt
Jul  1 12:47:32.692: INFO: Got endpoints: latency-svc-pgznt [6.543507103s]
Jul  1 12:47:32.730: INFO: Created: latency-svc-bzqd2
Jul  1 12:47:32.759: INFO: Got endpoints: latency-svc-bzqd2 [6.360536458s]
Jul  1 12:47:32.824: INFO: Created: latency-svc-45v4z
Jul  1 12:47:32.860: INFO: Got endpoints: latency-svc-45v4z [6.248753918s]
Jul  1 12:47:32.931: INFO: Created: latency-svc-jt9r9
Jul  1 12:47:32.949: INFO: Got endpoints: latency-svc-jt9r9 [6.169175611s]
Jul  1 12:47:32.974: INFO: Created: latency-svc-sl8fp
Jul  1 12:47:32.993: INFO: Got endpoints: latency-svc-sl8fp [6.063736682s]
Jul  1 12:47:33.016: INFO: Created: latency-svc-c2vr2
Jul  1 12:47:33.028: INFO: Got endpoints: latency-svc-c2vr2 [5.983458287s]
Jul  1 12:47:33.080: INFO: Created: latency-svc-v2rgx
Jul  1 12:47:33.101: INFO: Got endpoints: latency-svc-v2rgx [6.001940148s]
Jul  1 12:47:33.137: INFO: Created: latency-svc-7fpdb
Jul  1 12:47:33.155: INFO: Got endpoints: latency-svc-7fpdb [3.775584649s]
Jul  1 12:47:33.224: INFO: Created: latency-svc-wpgtf
Jul  1 12:47:33.227: INFO: Got endpoints: latency-svc-wpgtf [3.247279543s]
Jul  1 12:47:33.269: INFO: Created: latency-svc-9hhl5
Jul  1 12:47:33.290: INFO: Got endpoints: latency-svc-9hhl5 [3.003490642s]
Jul  1 12:47:33.373: INFO: Created: latency-svc-8b87l
Jul  1 12:47:33.379: INFO: Got endpoints: latency-svc-8b87l [2.740440373s]
Jul  1 12:47:33.407: INFO: Created: latency-svc-dnfbk
Jul  1 12:47:33.436: INFO: Got endpoints: latency-svc-dnfbk [2.148009842s]
Jul  1 12:47:33.465: INFO: Created: latency-svc-vkksh
Jul  1 12:47:33.529: INFO: Got endpoints: latency-svc-vkksh [1.68895212s]
Jul  1 12:47:33.545: INFO: Created: latency-svc-n7jdr
Jul  1 12:47:33.610: INFO: Got endpoints: latency-svc-n7jdr [1.206410519s]
Jul  1 12:47:33.673: INFO: Created: latency-svc-54zmt
Jul  1 12:47:33.676: INFO: Got endpoints: latency-svc-54zmt [1.132637908s]
Jul  1 12:47:33.706: INFO: Created: latency-svc-pfp8v
Jul  1 12:47:33.755: INFO: Got endpoints: latency-svc-pfp8v [1.062589697s]
Jul  1 12:47:33.810: INFO: Created: latency-svc-bdmws
Jul  1 12:47:33.814: INFO: Got endpoints: latency-svc-bdmws [1.054383634s]
Jul  1 12:47:33.885: INFO: Created: latency-svc-mb2nh
Jul  1 12:47:33.948: INFO: Got endpoints: latency-svc-mb2nh [1.08767055s]
Jul  1 12:47:33.965: INFO: Created: latency-svc-pxzjj
Jul  1 12:47:33.983: INFO: Got endpoints: latency-svc-pxzjj [1.033599547s]
Jul  1 12:47:34.007: INFO: Created: latency-svc-p77dw
Jul  1 12:47:34.104: INFO: Got endpoints: latency-svc-p77dw [1.110607679s]
Jul  1 12:47:34.119: INFO: Created: latency-svc-zsk8b
Jul  1 12:47:34.164: INFO: Got endpoints: latency-svc-zsk8b [1.13610143s]
Jul  1 12:47:34.308: INFO: Created: latency-svc-6nq9n
Jul  1 12:47:34.319: INFO: Got endpoints: latency-svc-6nq9n [1.218655019s]
Jul  1 12:47:34.541: INFO: Created: latency-svc-9dj4g
Jul  1 12:47:35.051: INFO: Got endpoints: latency-svc-9dj4g [1.896509217s]
Jul  1 12:47:36.058: INFO: Created: latency-svc-d7zwv
Jul  1 12:47:36.081: INFO: Got endpoints: latency-svc-d7zwv [2.854225111s]
Jul  1 12:47:36.231: INFO: Created: latency-svc-q2gsv
Jul  1 12:47:36.250: INFO: Got endpoints: latency-svc-q2gsv [2.959681908s]
Jul  1 12:47:36.501: INFO: Created: latency-svc-rprk4
Jul  1 12:47:36.538: INFO: Got endpoints: latency-svc-rprk4 [3.159204632s]
Jul  1 12:47:36.716: INFO: Created: latency-svc-r64lk
Jul  1 12:47:36.723: INFO: Got endpoints: latency-svc-r64lk [3.287752753s]
Jul  1 12:47:36.796: INFO: Created: latency-svc-h9rxs
Jul  1 12:47:36.829: INFO: Got endpoints: latency-svc-h9rxs [3.299451857s]
Jul  1 12:47:36.866: INFO: Created: latency-svc-5tvrf
Jul  1 12:47:36.875: INFO: Got endpoints: latency-svc-5tvrf [3.265151026s]
Jul  1 12:47:36.916: INFO: Created: latency-svc-cv5j4
Jul  1 12:47:36.923: INFO: Got endpoints: latency-svc-cv5j4 [3.24657206s]
Jul  1 12:47:36.952: INFO: Created: latency-svc-9thrl
Jul  1 12:47:36.955: INFO: Got endpoints: latency-svc-9thrl [3.20024019s]
Jul  1 12:47:37.011: INFO: Created: latency-svc-tllnc
Jul  1 12:47:37.062: INFO: Got endpoints: latency-svc-tllnc [3.248641462s]
Jul  1 12:47:37.076: INFO: Created: latency-svc-fjssr
Jul  1 12:47:37.147: INFO: Got endpoints: latency-svc-fjssr [3.198534667s]
Jul  1 12:47:37.212: INFO: Created: latency-svc-52rxp
Jul  1 12:47:37.226: INFO: Got endpoints: latency-svc-52rxp [3.243204411s]
Jul  1 12:47:37.269: INFO: Created: latency-svc-bpkhv
Jul  1 12:47:37.285: INFO: Got endpoints: latency-svc-bpkhv [3.181560408s]
Jul  1 12:47:37.306: INFO: Created: latency-svc-56plw
Jul  1 12:47:37.355: INFO: Got endpoints: latency-svc-56plw [3.191113187s]
Jul  1 12:47:37.378: INFO: Created: latency-svc-2gdb5
Jul  1 12:47:37.392: INFO: Got endpoints: latency-svc-2gdb5 [3.072980397s]
Jul  1 12:47:37.443: INFO: Created: latency-svc-wdh5b
Jul  1 12:47:37.504: INFO: Got endpoints: latency-svc-wdh5b [2.452500687s]
Jul  1 12:47:37.546: INFO: Created: latency-svc-mr6jz
Jul  1 12:47:37.562: INFO: Got endpoints: latency-svc-mr6jz [1.480004643s]
Jul  1 12:47:37.582: INFO: Created: latency-svc-5mzdq
Jul  1 12:47:38.063: INFO: Got endpoints: latency-svc-5mzdq [1.812280399s]
Jul  1 12:47:38.078: INFO: Created: latency-svc-r4mrr
Jul  1 12:47:38.096: INFO: Got endpoints: latency-svc-r4mrr [1.557360489s]
Jul  1 12:47:38.217: INFO: Created: latency-svc-5h2hq
Jul  1 12:47:38.227: INFO: Got endpoints: latency-svc-5h2hq [1.503095288s]
Jul  1 12:47:38.276: INFO: Created: latency-svc-5tcck
Jul  1 12:47:38.294: INFO: Got endpoints: latency-svc-5tcck [1.464999666s]
Jul  1 12:47:38.317: INFO: Created: latency-svc-hjv5d
Jul  1 12:47:38.367: INFO: Got endpoints: latency-svc-hjv5d [1.492152532s]
Jul  1 12:47:38.421: INFO: Created: latency-svc-jzjcp
Jul  1 12:47:38.438: INFO: Got endpoints: latency-svc-jzjcp [1.515629134s]
Jul  1 12:47:38.499: INFO: Created: latency-svc-gmpp7
Jul  1 12:47:38.515: INFO: Got endpoints: latency-svc-gmpp7 [1.560033005s]
Jul  1 12:47:38.558: INFO: Created: latency-svc-zj4nz
Jul  1 12:47:38.583: INFO: Got endpoints: latency-svc-zj4nz [1.520473627s]
Jul  1 12:47:38.631: INFO: Created: latency-svc-4b88h
Jul  1 12:47:38.638: INFO: Got endpoints: latency-svc-4b88h [1.491129097s]
Jul  1 12:47:38.661: INFO: Created: latency-svc-2tnr2
Jul  1 12:47:38.680: INFO: Got endpoints: latency-svc-2tnr2 [1.453966936s]
Jul  1 12:47:38.720: INFO: Created: latency-svc-nsf57
Jul  1 12:47:38.787: INFO: Got endpoints: latency-svc-nsf57 [1.501267991s]
Jul  1 12:47:38.822: INFO: Created: latency-svc-llhl9
Jul  1 12:47:38.843: INFO: Got endpoints: latency-svc-llhl9 [1.487780537s]
Jul  1 12:47:38.865: INFO: Created: latency-svc-n2hp9
Jul  1 12:47:38.879: INFO: Got endpoints: latency-svc-n2hp9 [1.486891809s]
Jul  1 12:47:38.930: INFO: Created: latency-svc-l42l7
Jul  1 12:47:38.943: INFO: Got endpoints: latency-svc-l42l7 [1.439148737s]
Jul  1 12:47:38.977: INFO: Created: latency-svc-kg94z
Jul  1 12:47:38.994: INFO: Got endpoints: latency-svc-kg94z [1.432756356s]
Jul  1 12:47:39.019: INFO: Created: latency-svc-mtdl6
Jul  1 12:47:39.068: INFO: Got endpoints: latency-svc-mtdl6 [1.005148565s]
Jul  1 12:47:39.085: INFO: Created: latency-svc-vfqz4
Jul  1 12:47:39.105: INFO: Got endpoints: latency-svc-vfqz4 [1.008990344s]
Jul  1 12:47:39.143: INFO: Created: latency-svc-7dtp7
Jul  1 12:47:39.212: INFO: Got endpoints: latency-svc-7dtp7 [985.498523ms]
Jul  1 12:47:39.229: INFO: Created: latency-svc-99hhq
Jul  1 12:47:39.243: INFO: Got endpoints: latency-svc-99hhq [948.961754ms]
Jul  1 12:47:39.272: INFO: Created: latency-svc-577mc
Jul  1 12:47:39.286: INFO: Got endpoints: latency-svc-577mc [918.510025ms]
Jul  1 12:47:39.307: INFO: Created: latency-svc-gcjd8
Jul  1 12:47:39.379: INFO: Got endpoints: latency-svc-gcjd8 [941.033538ms]
Jul  1 12:47:39.405: INFO: Created: latency-svc-blj22
Jul  1 12:47:39.419: INFO: Got endpoints: latency-svc-blj22 [903.366986ms]
Jul  1 12:47:39.447: INFO: Created: latency-svc-rr86q
Jul  1 12:47:39.461: INFO: Got endpoints: latency-svc-rr86q [878.341629ms]
Jul  1 12:47:39.548: INFO: Created: latency-svc-jltrt
Jul  1 12:47:39.550: INFO: Got endpoints: latency-svc-jltrt [912.563856ms]
Jul  1 12:47:39.596: INFO: Created: latency-svc-8vbw2
Jul  1 12:47:39.626: INFO: Got endpoints: latency-svc-8vbw2 [946.048589ms]
Jul  1 12:47:39.709: INFO: Created: latency-svc-wj5kp
Jul  1 12:47:39.739: INFO: Got endpoints: latency-svc-wj5kp [952.181777ms]
Jul  1 12:47:39.740: INFO: Created: latency-svc-mjfx8
Jul  1 12:47:39.775: INFO: Got endpoints: latency-svc-mjfx8 [932.58457ms]
Jul  1 12:47:39.852: INFO: Created: latency-svc-rgz2h
Jul  1 12:47:39.865: INFO: Got endpoints: latency-svc-rgz2h [985.808057ms]
Jul  1 12:47:39.891: INFO: Created: latency-svc-qcz2n
Jul  1 12:47:39.908: INFO: Got endpoints: latency-svc-qcz2n [965.487883ms]
Jul  1 12:47:39.943: INFO: Created: latency-svc-8nmfs
Jul  1 12:47:39.990: INFO: Got endpoints: latency-svc-8nmfs [996.079548ms]
Jul  1 12:47:40.003: INFO: Created: latency-svc-8tccc
Jul  1 12:47:40.022: INFO: Got endpoints: latency-svc-8tccc [954.60944ms]
Jul  1 12:47:40.045: INFO: Created: latency-svc-6bkhm
Jul  1 12:47:40.059: INFO: Got endpoints: latency-svc-6bkhm [954.290902ms]
Jul  1 12:47:40.146: INFO: Created: latency-svc-blqht
Jul  1 12:47:40.150: INFO: Got endpoints: latency-svc-blqht [937.42953ms]
Jul  1 12:47:40.213: INFO: Created: latency-svc-xd866
Jul  1 12:47:40.228: INFO: Got endpoints: latency-svc-xd866 [984.666094ms]
Jul  1 12:47:40.302: INFO: Created: latency-svc-4zmvz
Jul  1 12:47:40.305: INFO: Got endpoints: latency-svc-4zmvz [1.019357146s]
Jul  1 12:47:40.340: INFO: Created: latency-svc-w98qx
Jul  1 12:47:40.364: INFO: Got endpoints: latency-svc-w98qx [984.977852ms]
Jul  1 12:47:40.399: INFO: Created: latency-svc-b2t6x
Jul  1 12:47:40.457: INFO: Got endpoints: latency-svc-b2t6x [1.038546833s]
Jul  1 12:47:40.471: INFO: Created: latency-svc-w558w
Jul  1 12:47:40.497: INFO: Got endpoints: latency-svc-w558w [1.035922832s]
Jul  1 12:47:40.637: INFO: Created: latency-svc-rcdct
Jul  1 12:47:40.640: INFO: Got endpoints: latency-svc-rcdct [1.090068341s]
Jul  1 12:47:40.676: INFO: Created: latency-svc-g9gkp
Jul  1 12:47:40.689: INFO: Got endpoints: latency-svc-g9gkp [1.063092028s]
Jul  1 12:47:40.706: INFO: Created: latency-svc-mcfbx
Jul  1 12:47:40.731: INFO: Got endpoints: latency-svc-mcfbx [992.441524ms]
Jul  1 12:47:40.807: INFO: Created: latency-svc-sn6zh
Jul  1 12:47:40.822: INFO: Got endpoints: latency-svc-sn6zh [1.046307239s]
Jul  1 12:47:40.875: INFO: Created: latency-svc-sqdd5
Jul  1 12:47:40.943: INFO: Got endpoints: latency-svc-sqdd5 [1.077380428s]
Jul  1 12:47:40.970: INFO: Created: latency-svc-f9p2q
Jul  1 12:47:40.990: INFO: Got endpoints: latency-svc-f9p2q [1.081954296s]
Jul  1 12:47:41.013: INFO: Created: latency-svc-6lcc9
Jul  1 12:47:41.027: INFO: Got endpoints: latency-svc-6lcc9 [1.036044144s]
Jul  1 12:47:41.089: INFO: Created: latency-svc-525vs
Jul  1 12:47:41.091: INFO: Got endpoints: latency-svc-525vs [1.068987771s]
Jul  1 12:47:41.156: INFO: Created: latency-svc-4tz26
Jul  1 12:47:41.248: INFO: Got endpoints: latency-svc-4tz26 [1.188795441s]
Jul  1 12:47:41.250: INFO: Created: latency-svc-525g2
Jul  1 12:47:41.261: INFO: Got endpoints: latency-svc-525g2 [1.110949087s]
Jul  1 12:47:41.281: INFO: Created: latency-svc-trkxg
Jul  1 12:47:41.511: INFO: Got endpoints: latency-svc-trkxg [1.283877663s]
Jul  1 12:47:41.587: INFO: Created: latency-svc-bclrn
Jul  1 12:47:41.603: INFO: Got endpoints: latency-svc-bclrn [1.297893356s]
Jul  1 12:47:41.666: INFO: Created: latency-svc-zqpsb
Jul  1 12:47:41.681: INFO: Got endpoints: latency-svc-zqpsb [1.316882761s]
Jul  1 12:47:41.702: INFO: Created: latency-svc-j4xdw
Jul  1 12:47:41.711: INFO: Got endpoints: latency-svc-j4xdw [1.253940734s]
Jul  1 12:47:41.747: INFO: Created: latency-svc-brtfq
Jul  1 12:47:41.804: INFO: Got endpoints: latency-svc-brtfq [1.307015746s]
Jul  1 12:47:41.820: INFO: Created: latency-svc-bcp4f
Jul  1 12:47:41.862: INFO: Got endpoints: latency-svc-bcp4f [1.221845882s]
Jul  1 12:47:41.894: INFO: Created: latency-svc-skdn8
Jul  1 12:47:41.937: INFO: Got endpoints: latency-svc-skdn8 [1.247619886s]
Jul  1 12:47:41.959: INFO: Created: latency-svc-fffgp
Jul  1 12:47:41.971: INFO: Got endpoints: latency-svc-fffgp [1.239013109s]
Jul  1 12:47:41.989: INFO: Created: latency-svc-9bnhl
Jul  1 12:47:42.001: INFO: Got endpoints: latency-svc-9bnhl [1.179270882s]
Jul  1 12:47:42.018: INFO: Created: latency-svc-74qp5
Jul  1 12:47:42.080: INFO: Got endpoints: latency-svc-74qp5 [1.136965823s]
Jul  1 12:47:42.121: INFO: Created: latency-svc-k929n
Jul  1 12:47:42.133: INFO: Got endpoints: latency-svc-k929n [1.142936721s]
Jul  1 12:47:42.158: INFO: Created: latency-svc-gncdm
Jul  1 12:47:42.169: INFO: Got endpoints: latency-svc-gncdm [1.14246991s]
Jul  1 12:47:42.254: INFO: Created: latency-svc-jsd2m
Jul  1 12:47:42.272: INFO: Got endpoints: latency-svc-jsd2m [1.180224408s]
Jul  1 12:47:42.307: INFO: Created: latency-svc-dvzmf
Jul  1 12:47:42.314: INFO: Got endpoints: latency-svc-dvzmf [1.066275103s]
Jul  1 12:47:42.403: INFO: Created: latency-svc-vxddw
Jul  1 12:47:42.416: INFO: Got endpoints: latency-svc-vxddw [1.15522945s]
Jul  1 12:47:42.450: INFO: Created: latency-svc-zr7dt
Jul  1 12:47:42.465: INFO: Got endpoints: latency-svc-zr7dt [952.941741ms]
Jul  1 12:47:42.499: INFO: Created: latency-svc-ngq66
Jul  1 12:47:42.583: INFO: Got endpoints: latency-svc-ngq66 [979.627558ms]
Jul  1 12:47:42.585: INFO: Created: latency-svc-whtxj
Jul  1 12:47:42.626: INFO: Got endpoints: latency-svc-whtxj [944.461604ms]
Jul  1 12:47:42.650: INFO: Created: latency-svc-5kfkj
Jul  1 12:47:42.663: INFO: Got endpoints: latency-svc-5kfkj [951.708728ms]
Jul  1 12:47:42.749: INFO: Created: latency-svc-rfr94
Jul  1 12:47:42.776: INFO: Got endpoints: latency-svc-rfr94 [971.505402ms]
Jul  1 12:47:42.777: INFO: Created: latency-svc-r42lf
Jul  1 12:47:42.825: INFO: Got endpoints: latency-svc-r42lf [962.583432ms]
Jul  1 12:47:42.900: INFO: Created: latency-svc-x5m7n
Jul  1 12:47:42.926: INFO: Got endpoints: latency-svc-x5m7n [989.014995ms]
Jul  1 12:47:42.970: INFO: Created: latency-svc-9gq6h
Jul  1 12:47:42.982: INFO: Got endpoints: latency-svc-9gq6h [1.01176701s]
Jul  1 12:47:43.038: INFO: Created: latency-svc-fxdhm
Jul  1 12:47:43.068: INFO: Created: latency-svc-rjlqr
Jul  1 12:47:43.068: INFO: Got endpoints: latency-svc-fxdhm [1.06734861s]
Jul  1 12:47:43.093: INFO: Got endpoints: latency-svc-rjlqr [1.01279664s]
Jul  1 12:47:43.126: INFO: Created: latency-svc-s85md
Jul  1 12:47:43.201: INFO: Got endpoints: latency-svc-s85md [1.067292172s]
Jul  1 12:47:43.210: INFO: Created: latency-svc-z697l
Jul  1 12:47:43.223: INFO: Got endpoints: latency-svc-z697l [1.054130721s]
Jul  1 12:47:43.250: INFO: Created: latency-svc-4ssq5
Jul  1 12:47:43.267: INFO: Got endpoints: latency-svc-4ssq5 [994.914157ms]
Jul  1 12:47:43.285: INFO: Created: latency-svc-stlrx
Jul  1 12:47:43.367: INFO: Got endpoints: latency-svc-stlrx [1.052477866s]
Jul  1 12:47:43.401: INFO: Created: latency-svc-nn4v7
Jul  1 12:47:43.417: INFO: Got endpoints: latency-svc-nn4v7 [1.001048419s]
Jul  1 12:47:43.481: INFO: Created: latency-svc-5jgkc
Jul  1 12:47:43.484: INFO: Got endpoints: latency-svc-5jgkc [1.019241003s]
Jul  1 12:47:43.532: INFO: Created: latency-svc-xkwls
Jul  1 12:47:43.639: INFO: Got endpoints: latency-svc-xkwls [1.055864573s]
Jul  1 12:47:43.644: INFO: Created: latency-svc-dbqd8
Jul  1 12:47:43.682: INFO: Got endpoints: latency-svc-dbqd8 [1.055816953s]
Jul  1 12:47:43.705: INFO: Created: latency-svc-r5n7w
Jul  1 12:47:43.717: INFO: Got endpoints: latency-svc-r5n7w [1.054159148s]
Jul  1 12:47:43.735: INFO: Created: latency-svc-fl499
Jul  1 12:47:43.804: INFO: Got endpoints: latency-svc-fl499 [1.02838077s]
Jul  1 12:47:43.809: INFO: Created: latency-svc-8kv6n
Jul  1 12:47:43.838: INFO: Got endpoints: latency-svc-8kv6n [1.013312594s]
Jul  1 12:47:43.863: INFO: Created: latency-svc-zwzdr
Jul  1 12:47:43.885: INFO: Got endpoints: latency-svc-zwzdr [958.834381ms]
Jul  1 12:47:43.984: INFO: Created: latency-svc-h4bvj
Jul  1 12:47:44.061: INFO: Created: latency-svc-7mlv5
Jul  1 12:47:44.061: INFO: Got endpoints: latency-svc-h4bvj [1.079001725s]
Jul  1 12:47:44.392: INFO: Got endpoints: latency-svc-7mlv5 [1.323960483s]
Jul  1 12:47:44.812: INFO: Created: latency-svc-vk4ws
Jul  1 12:47:44.863: INFO: Got endpoints: latency-svc-vk4ws [1.770740031s]
Jul  1 12:47:45.075: INFO: Created: latency-svc-lnltf
Jul  1 12:47:45.110: INFO: Got endpoints: latency-svc-lnltf [1.909108516s]
Jul  1 12:47:45.283: INFO: Created: latency-svc-526bx
Jul  1 12:47:45.307: INFO: Got endpoints: latency-svc-526bx [2.083342406s]
Jul  1 12:47:45.542: INFO: Created: latency-svc-4qmkp
Jul  1 12:47:45.577: INFO: Got endpoints: latency-svc-4qmkp [2.310135113s]
Jul  1 12:47:45.633: INFO: Created: latency-svc-28c9z
Jul  1 12:47:45.685: INFO: Got endpoints: latency-svc-28c9z [2.317677147s]
Jul  1 12:47:45.715: INFO: Created: latency-svc-gcpgl
Jul  1 12:47:45.741: INFO: Got endpoints: latency-svc-gcpgl [2.32417689s]
Jul  1 12:47:45.776: INFO: Created: latency-svc-5lrdm
Jul  1 12:47:45.828: INFO: Got endpoints: latency-svc-5lrdm [2.344123632s]
Jul  1 12:47:45.842: INFO: Created: latency-svc-qhg9w
Jul  1 12:47:45.847: INFO: Got endpoints: latency-svc-qhg9w [2.208529482s]
Jul  1 12:47:45.891: INFO: Created: latency-svc-6qlj2
Jul  1 12:47:45.907: INFO: Got endpoints: latency-svc-6qlj2 [2.225663176s]
Jul  1 12:47:46.140: INFO: Created: latency-svc-jcb6j
Jul  1 12:47:46.143: INFO: Got endpoints: latency-svc-jcb6j [2.425522376s]
Jul  1 12:47:46.375: INFO: Created: latency-svc-bx96x
Jul  1 12:47:46.387: INFO: Got endpoints: latency-svc-bx96x [2.582653912s]
Jul  1 12:47:46.419: INFO: Created: latency-svc-vhp7x
Jul  1 12:47:46.436: INFO: Got endpoints: latency-svc-vhp7x [2.597190137s]
Jul  1 12:47:46.505: INFO: Created: latency-svc-28qvg
Jul  1 12:47:46.508: INFO: Got endpoints: latency-svc-28qvg [2.622723798s]
Jul  1 12:47:46.538: INFO: Created: latency-svc-n8q6k
Jul  1 12:47:46.550: INFO: Got endpoints: latency-svc-n8q6k [2.48852718s]
Jul  1 12:47:46.567: INFO: Created: latency-svc-k44xb
Jul  1 12:47:46.580: INFO: Got endpoints: latency-svc-k44xb [2.187471063s]
Jul  1 12:47:46.600: INFO: Created: latency-svc-6ptdg
Jul  1 12:47:46.643: INFO: Got endpoints: latency-svc-6ptdg [1.779643975s]
Jul  1 12:47:46.646: INFO: Created: latency-svc-jd5bs
Jul  1 12:47:46.677: INFO: Got endpoints: latency-svc-jd5bs [1.56676875s]
Jul  1 12:47:46.711: INFO: Created: latency-svc-ck9wk
Jul  1 12:47:46.725: INFO: Got endpoints: latency-svc-ck9wk [1.418204906s]
Jul  1 12:47:46.810: INFO: Created: latency-svc-8rd7p
Jul  1 12:47:46.822: INFO: Got endpoints: latency-svc-8rd7p [1.244730572s]
Jul  1 12:47:46.856: INFO: Created: latency-svc-6r4wt
Jul  1 12:47:46.870: INFO: Got endpoints: latency-svc-6r4wt [1.185021063s]
Jul  1 12:47:47.165: INFO: Created: latency-svc-g8tkf
Jul  1 12:47:47.174: INFO: Got endpoints: latency-svc-g8tkf [1.432357631s]
Jul  1 12:47:47.307: INFO: Created: latency-svc-v9c8w
Jul  1 12:47:47.356: INFO: Created: latency-svc-mnmrw
Jul  1 12:47:47.356: INFO: Got endpoints: latency-svc-v9c8w [1.527544975s]
Jul  1 12:47:47.402: INFO: Got endpoints: latency-svc-mnmrw [1.55521918s]
Jul  1 12:47:47.462: INFO: Created: latency-svc-rdwfr
Jul  1 12:47:47.471: INFO: Got endpoints: latency-svc-rdwfr [1.563224718s]
Jul  1 12:47:47.507: INFO: Created: latency-svc-n6f4p
Jul  1 12:47:47.523: INFO: Got endpoints: latency-svc-n6f4p [1.379922812s]
Jul  1 12:47:47.595: INFO: Created: latency-svc-ntrvc
Jul  1 12:47:47.610: INFO: Got endpoints: latency-svc-ntrvc [1.222809576s]
Jul  1 12:47:47.637: INFO: Created: latency-svc-tvpbd
Jul  1 12:47:47.652: INFO: Got endpoints: latency-svc-tvpbd [1.216286518s]
Jul  1 12:47:47.671: INFO: Created: latency-svc-9j224
Jul  1 12:47:47.733: INFO: Got endpoints: latency-svc-9j224 [1.224977074s]
Jul  1 12:47:47.768: INFO: Created: latency-svc-h4s8s
Jul  1 12:47:47.778: INFO: Got endpoints: latency-svc-h4s8s [1.22834149s]
Jul  1 12:47:47.798: INFO: Created: latency-svc-w22dk
Jul  1 12:47:47.815: INFO: Got endpoints: latency-svc-w22dk [1.234700088s]
Jul  1 12:47:47.882: INFO: Created: latency-svc-v8xbf
Jul  1 12:47:47.899: INFO: Got endpoints: latency-svc-v8xbf [1.255967899s]
Jul  1 12:47:47.925: INFO: Created: latency-svc-fwmh2
Jul  1 12:47:47.953: INFO: Got endpoints: latency-svc-fwmh2 [1.275981418s]
Jul  1 12:47:48.021: INFO: Created: latency-svc-h6mdr
Jul  1 12:47:48.050: INFO: Got endpoints: latency-svc-h6mdr [1.324749682s]
Jul  1 12:47:48.050: INFO: Created: latency-svc-w6bfr
Jul  1 12:47:48.074: INFO: Got endpoints: latency-svc-w6bfr [1.252567209s]
Jul  1 12:47:48.117: INFO: Created: latency-svc-6jntr
Jul  1 12:47:48.151: INFO: Got endpoints: latency-svc-6jntr [1.28168754s]
Jul  1 12:47:48.181: INFO: Created: latency-svc-wmcgn
Jul  1 12:47:48.237: INFO: Got endpoints: latency-svc-wmcgn [1.063248731s]
Jul  1 12:47:48.296: INFO: Created: latency-svc-rb5wk
Jul  1 12:47:48.320: INFO: Created: latency-svc-ttrvz
Jul  1 12:47:48.320: INFO: Got endpoints: latency-svc-rb5wk [964.720152ms]
Jul  1 12:47:48.344: INFO: Got endpoints: latency-svc-ttrvz [941.731234ms]
Jul  1 12:47:48.386: INFO: Created: latency-svc-jq9g9
Jul  1 12:47:48.469: INFO: Got endpoints: latency-svc-jq9g9 [998.637211ms]
Jul  1 12:47:48.472: INFO: Created: latency-svc-tqkmz
Jul  1 12:47:48.483: INFO: Got endpoints: latency-svc-tqkmz [960.483978ms]
Jul  1 12:47:48.507: INFO: Created: latency-svc-j9hpx
Jul  1 12:47:48.520: INFO: Got endpoints: latency-svc-j9hpx [909.996031ms]
Jul  1 12:47:48.537: INFO: Created: latency-svc-9sjlj
Jul  1 12:47:48.550: INFO: Got endpoints: latency-svc-9sjlj [897.848681ms]
Jul  1 12:47:48.637: INFO: Created: latency-svc-55rp9
Jul  1 12:47:48.655: INFO: Got endpoints: latency-svc-55rp9 [921.959851ms]
Jul  1 12:47:48.685: INFO: Created: latency-svc-v4mf5
Jul  1 12:47:48.699: INFO: Got endpoints: latency-svc-v4mf5 [920.509355ms]
Jul  1 12:47:48.699: INFO: Latencies: [878.341629ms 897.848681ms 903.366986ms 909.996031ms 912.563856ms 918.510025ms 920.509355ms 921.959851ms 932.58457ms 937.42953ms 941.033538ms 941.731234ms 944.461604ms 946.048589ms 948.961754ms 951.708728ms 952.181777ms 952.941741ms 954.290902ms 954.60944ms 958.834381ms 960.483978ms 962.583432ms 964.720152ms 965.487883ms 971.505402ms 979.627558ms 984.666094ms 984.977852ms 985.498523ms 985.808057ms 989.014995ms 992.441524ms 994.914157ms 996.079548ms 998.637211ms 1.001048419s 1.005148565s 1.008990344s 1.01176701s 1.01279664s 1.013312594s 1.019241003s 1.019357146s 1.02838077s 1.033599547s 1.035922832s 1.036044144s 1.038546833s 1.046307239s 1.052477866s 1.054130721s 1.054159148s 1.054383634s 1.055816953s 1.055864573s 1.062589697s 1.063092028s 1.063248731s 1.066275103s 1.067292172s 1.06734861s 1.068987771s 1.077380428s 1.079001725s 1.081954296s 1.08767055s 1.090068341s 1.106344969s 1.110607679s 1.110949087s 1.132637908s 1.13610143s 1.136965823s 1.14246991s 1.142936721s 1.15522945s 1.179270882s 1.180224408s 1.185021063s 1.188795441s 1.206410519s 1.216286518s 1.218655019s 1.221845882s 1.222809576s 1.224977074s 1.22834149s 1.234700088s 1.239013109s 1.244730572s 1.247619886s 1.252567209s 1.253940734s 1.255967899s 1.275981418s 1.28168754s 1.283877663s 1.297893356s 1.307015746s 1.316882761s 1.323960483s 1.324749682s 1.351175525s 1.379922812s 1.418204906s 1.432357631s 1.432756356s 1.439148737s 1.453966936s 1.464999666s 1.480004643s 1.486891809s 1.487780537s 1.491129097s 1.492152532s 1.501267991s 1.503095288s 1.515629134s 1.515689894s 1.520473627s 1.527544975s 1.55521918s 1.557360489s 1.560033005s 1.563224718s 1.56676875s 1.659122084s 1.68895212s 1.770740031s 1.779643975s 1.812280399s 1.896509217s 1.899945632s 1.909108516s 1.944397558s 1.976416952s 1.98176839s 1.994446324s 1.999780874s 2.020135974s 2.047935685s 2.073158199s 2.083342406s 2.106401138s 2.148009842s 2.187471063s 2.208529482s 2.225663176s 2.227996291s 2.310135113s 2.317677147s 
2.32417689s 2.337480725s 2.344123632s 2.410999904s 2.425522376s 2.452500687s 2.48852718s 2.582653912s 2.584970025s 2.597190137s 2.622723798s 2.740440373s 2.743337788s 2.79062377s 2.854225111s 2.92115262s 2.959681908s 2.975073128s 3.003490642s 3.072980397s 3.159204632s 3.181560408s 3.191113187s 3.198534667s 3.20024019s 3.243204411s 3.24657206s 3.247279543s 3.248641462s 3.265151026s 3.287752753s 3.299451857s 3.775584649s 4.103168309s 4.594666131s 4.828260375s 5.005587246s 5.496577997s 5.983458287s 6.001865442s 6.001940148s 6.063736682s 6.169175611s 6.248753918s 6.360536458s 6.434303163s 6.520517497s 6.543507103s]
Jul  1 12:47:48.699: INFO: 50 %ile: 1.316882761s
Jul  1 12:47:48.699: INFO: 90 %ile: 3.248641462s
Jul  1 12:47:48.699: INFO: 99 %ile: 6.520517497s
Jul  1 12:47:48.699: INFO: Total sample count: 200
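(Editor's note on the percentile lines above: the reported 99 %ile of 6.520517497s is the 199th of the 200 sorted samples, not the maximum. A nearest-rank indexing convention of the form sorted[n*p/100] reproduces all three reported values; this is a minimal sketch of that convention, an assumption for illustration, not the e2e framework's actual source.)

```python
def percentile(latencies_ms, p):
    """Nearest-rank style percentile using the index sorted[n*p//100].

    This convention is an assumption inferred from the log above
    (50/90/99 %ile of 200 samples land at sorted indices 100/180/198),
    not a copy of the Kubernetes e2e framework's implementation.
    Valid for 0 <= p < 100.
    """
    s = sorted(latencies_ms)
    return s[len(s) * p // 100]

# Ten hypothetical samples in milliseconds (illustrative, not from the run above).
samples = [878, 903, 941, 985, 1019, 1063, 1110, 1188, 1316, 6520]
print(percentile(samples, 50))  # index 10*50//100 = 5 -> 1063
print(percentile(samples, 99))  # index 10*99//100 = 9 -> 6520 (the max, for n=10)
```

Note that with 200 samples the 99th-percentile index is 198, which is why the run's reported 99 %ile (6.520517497s) is one step below the largest observed latency (6.543507103s).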
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul  1 12:47:48.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7217" for this suite.

• [SLOW TEST:32.169 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":274,"skipped":4711,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSS
Jul  1 12:47:48.720: INFO: Running AfterSuite actions on all nodes
Jul  1 12:47:48.720: INFO: Running AfterSuite actions on node 1
Jul  1 12:47:48.720: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":274,"skipped":4717,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:74

Ran 275 of 4992 Specs in 7051.940 seconds
FAIL! -- 274 Passed | 1 Failed | 0 Pending | 4717 Skipped
--- FAIL: TestE2E (7052.02s)
FAIL