I0704 08:10:29.157663       6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0704 08:10:29.157920       6 e2e.go:109] Starting e2e run "495c0ca3-30ba-4919-ac44-c0ef702cd874" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1593850228 - Will randomize all specs
Will run 278 of 4843 specs

Jul  4 08:10:29.211: INFO: >>> kubeConfig: /root/.kube/config
Jul  4 08:10:29.215: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul  4 08:10:29.242: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul  4 08:10:29.271: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul  4 08:10:29.271: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul  4 08:10:29.271: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul  4 08:10:29.283: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul  4 08:10:29.283: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul  4 08:10:29.283: INFO: e2e test version: v1.17.8
Jul  4 08:10:29.285: INFO: kube-apiserver version: v1.17.5
Jul  4 08:10:29.285: INFO: >>> kubeConfig: /root/.kube/config
Jul  4 08:10:29.289: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
[sig-network] DNS
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:10:29.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
Jul  4 08:10:29.386: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3609 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3609;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3609 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3609;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3609.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3609.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3609.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3609.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3609.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 56.235.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.235.56_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 56.235.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.235.56_tcp@PTR;
  sleep 1;
done
STEP: Running these commands on jessie: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3609 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3609;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3609 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3609;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3609.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3609.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3609.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3609.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3609.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 56.235.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.235.56_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 56.235.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.235.56_tcp@PTR;
  sleep 1;
done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  4 08:10:53.477: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.480: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.483: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.486: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.489: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.498: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.510: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.531: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.534: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.537: INFO: Unable to read jessie_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.540: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.543: INFO: Unable to read jessie_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.546: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.550: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.553: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:53.574: INFO: Lookups using dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3609 wheezy_tcp@dns-test-service.dns-3609 wheezy_udp@dns-test-service.dns-3609.svc wheezy_tcp@dns-test-service.dns-3609.svc wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3609 jessie_tcp@dns-test-service.dns-3609 jessie_udp@dns-test-service.dns-3609.svc jessie_tcp@dns-test-service.dns-3609.svc jessie_udp@_http._tcp.dns-test-service.dns-3609.svc jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc]

Jul  4 08:10:58.579: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.582: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.585: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.587: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.590: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.593: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.595: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.598: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.620: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.623: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.626: INFO: Unable to read jessie_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.629: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.637: INFO: Unable to read jessie_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.640: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.643: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.645: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:10:58.658: INFO: Lookups using dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3609 wheezy_tcp@dns-test-service.dns-3609 wheezy_udp@dns-test-service.dns-3609.svc wheezy_tcp@dns-test-service.dns-3609.svc wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3609 jessie_tcp@dns-test-service.dns-3609 jessie_udp@dns-test-service.dns-3609.svc jessie_tcp@dns-test-service.dns-3609.svc jessie_udp@_http._tcp.dns-test-service.dns-3609.svc jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc]

Jul  4 08:11:03.588: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.599: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.604: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.606: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.608: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.611: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.613: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.615: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.634: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.636: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.639: INFO: Unable to read jessie_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.642: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.644: INFO: Unable to read jessie_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.647: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.648: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.651: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:03.665: INFO: Lookups using dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3609 wheezy_tcp@dns-test-service.dns-3609 wheezy_udp@dns-test-service.dns-3609.svc wheezy_tcp@dns-test-service.dns-3609.svc wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3609 jessie_tcp@dns-test-service.dns-3609 jessie_udp@dns-test-service.dns-3609.svc jessie_tcp@dns-test-service.dns-3609.svc jessie_udp@_http._tcp.dns-test-service.dns-3609.svc jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc]

Jul  4 08:11:08.578: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.581: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.592: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.598: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.600: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.602: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.605: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.607: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.624: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.626: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.628: INFO: Unable to read jessie_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.630: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.632: INFO: Unable to read jessie_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.635: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.637: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.640: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul  4 08:11:08.678: INFO: Lookups using dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3609 wheezy_tcp@dns-test-service.dns-3609 wheezy_udp@dns-test-service.dns-3609.svc wheezy_tcp@dns-test-service.dns-3609.svc wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3609 jessie_tcp@dns-test-service.dns-3609 jessie_udp@dns-test-service.dns-3609.svc jessie_tcp@dns-test-service.dns-3609.svc jessie_udp@_http._tcp.dns-test-service.dns-3609.svc jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc]

Jul  4 08:11:13.670: INFO: DNS probes using dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:11:14.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3609" for this suite.
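The wheezy/jessie probe commands above all follow one pattern: run a `dig +search` query, and write an `OK` marker file to `/results` only when the answer section is non-empty; the test framework then polls those marker files to decide pass/fail. (The doubled `$$` in the log is pod-spec escaping; a plain shell uses a single `$`.) A minimal sketch of that pattern, with a hypothetical `lookup` function standing in for the real `dig` query so it runs outside a cluster:

```shell
#!/bin/sh
# Sketch of the probe pattern from the log. RESULTS stands in for /results,
# and lookup() is a hypothetical stand-in for:
#   dig +notcp +noall +answer +search "$name" A
RESULTS=./results
mkdir -p "$RESULTS"

lookup() {
  # Pretend the resolver returned an A record for every name.
  echo "10.99.235.56"
}

for name in dns-test-service dns-test-service.dns-3609 dns-test-service.dns-3609.svc; do
  # Write the OK marker only if the query produced a non-empty answer;
  # an empty answer leaves the file absent, which the poller treats as failure.
  check="$(lookup "$name")" && test -n "$check" && echo OK > "$RESULTS/wheezy_udp@$name"
done
```

The per-name marker files are what make the early "Unable to read wheezy_udp@…" lines above expected noise: the harness retries every 5 seconds until each file exists.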
• [SLOW TEST:45.591 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":1,"skipped":6,"failed":0}
SSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:11:14.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:11:16.601: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:11:29.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9323" for this suite.

• [SLOW TEST:12.669 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":76,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:11:29.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul  4 08:11:29.500: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  4 08:11:29.522: INFO: Waiting for terminating namespaces to be deleted...
Jul  4 08:11:29.525: INFO: 
Logging pods the kubelet thinks is on node jerma-worker before test
Jul  4 08:11:29.546: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  4 08:11:29.546: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  4 08:11:29.546: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  4 08:11:29.546: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  4 08:11:29.546: INFO: 
Logging pods the kubelet thinks is on node jerma-worker2 before test
Jul  4 08:11:29.575: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  4 08:11:29.575: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  4 08:11:29.575: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container status recorded)
Jul  4 08:11:29.575: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  4 08:11:29.575: INFO: bin-falseb99ac0ab-6742-4cb8-93c1-49fb79ae1762 from kubelet-test-9323 started at 2020-07-04 08:11:17 +0000 UTC (1 container status recorded)
Jul  4 08:11:29.575: INFO: 	Container bin-falseb99ac0ab-6742-4cb8-93c1-49fb79ae1762 ready: false, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Jul  4 08:11:29.628: INFO: Pod kindnet-gnxwn requesting resource cpu=100m on Node jerma-worker
Jul  4 08:11:29.628: INFO: Pod kindnet-qg8qr requesting resource cpu=100m on Node jerma-worker2
Jul  4 08:11:29.628: INFO: Pod kube-proxy-8sp85 requesting resource cpu=0m on Node jerma-worker
Jul  4 08:11:29.628: INFO: Pod kube-proxy-b2ncl requesting resource cpu=0m on Node jerma-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Jul  4 08:11:29.628: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Jul  4 08:11:29.634: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c.161e7e64d34569cf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1201/filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c to jerma-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c.161e7e651ef929e4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c.161e7e656e4bd128], Reason = [Created], Message = [Created container filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c]
STEP: Considering event: Type = [Normal], Name = [filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c.161e7e659029cb25], Reason = [Started], Message = [Started container filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c]
STEP: Considering event: Type = [Normal], Name = [filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66.161e7e64d4e30f17], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1201/filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66 to jerma-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66.161e7e656f816570], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66.161e7e65dd0bb98c], Reason = [Created], Message = [Created container filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66]
STEP: Considering event: Type = [Normal], Name = [filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66.161e7e65edddf061], Reason = [Started], Message = [Started container filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66]
STEP: Considering event: Type = [Warning], Name = [additional-pod.161e7e663bdc8f73], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:11:36.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1201" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:7.292 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":4,"skipped":84,"failed":0}
SSSS
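[Editor's note] The test above saturates each worker with a pause-based "filler" pod, then submits one more pod whose CPU request cannot fit, which produces the FailedScheduling event in the log. A minimal sketch of such an unschedulable pod; the pod name and image mirror the log, but the request value is illustrative (anything above the remaining allocatable CPU triggers the same event):

```yaml
# Hypothetical pod that requests more CPU than any node has free;
# the scheduler should reject it with a FailedScheduling event.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod            # name matching the event above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # image used by the filler pods in the log
    resources:
      requests:
        cpu: "2"                  # illustrative; exceeds the CPU left after the filler pods
      limits:
        cpu: "2"
```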
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:11:36.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-f60580e3-50be-4faf-99cc-0266a23b2ba9
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-f60580e3-50be-4faf-99cc-0266a23b2ba9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:13:01.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7582" for this suite.

• [SLOW TEST:84.718 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":88,"failed":0}
S
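[Editor's note] The projected-ConfigMap test above creates a ConfigMap, mounts it through a `projected` volume, updates the ConfigMap, and waits for the kubelet to refresh the mounted file. A sketch of the object shapes involved; names, keys, and the polling command are illustrative, not taken from the log:

```yaml
# Hypothetical ConfigMap plus a pod that consumes it via a projected volume.
# Updates to data-1 are eventually reflected in the mounted file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test   # illustrative; the test uses a generated name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps   # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test
```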
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:13:01.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul  4 08:13:01.586: INFO: Waiting up to 5m0s for pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542" in namespace "downward-api-460" to be "success or failure"
Jul  4 08:13:01.603: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542": Phase="Pending", Reason="", readiness=false. Elapsed: 17.137328ms
Jul  4 08:13:03.608: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022057229s
Jul  4 08:13:05.612: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026410235s
Jul  4 08:13:07.679: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093490964s
Jul  4 08:13:09.683: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.097049672s
STEP: Saw pod success
Jul  4 08:13:09.683: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542" satisfied condition "success or failure"
Jul  4 08:13:09.685: INFO: Trying to get logs from node jerma-worker pod downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542 container dapi-container: 
STEP: delete the pod
Jul  4 08:13:09.725: INFO: Waiting for pod downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542 to disappear
Jul  4 08:13:09.746: INFO: Pod downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:13:09.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-460" for this suite.

• [SLOW TEST:8.394 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":89,"failed":0}
SSSSSSSSSSSS
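[Editor's note] The Downward API test above runs a pod that reads its own `limits.cpu/memory` and `requests.cpu/memory` through `resourceFieldRef` environment variables. A minimal sketch; the container name `dapi-container` comes from the log, while the resource values and variable names are illustrative:

```yaml
# Hypothetical pod exposing its resource limits/requests as env vars
# via the downward API (resourceFieldRef defaults to the enclosing container).
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container           # container name from the log above
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m                  # illustrative values
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```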
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:13:09.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Jul  4 08:13:10.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-651 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jul  4 08:13:17.471: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0704 08:13:17.421516      30 log.go:172] (0xc000b344d0) (0xc000b18140) Create stream\nI0704 08:13:17.421581      30 log.go:172] (0xc000b344d0) (0xc000b18140) Stream added, broadcasting: 1\nI0704 08:13:17.423739      30 log.go:172] (0xc000b344d0) Reply frame received for 1\nI0704 08:13:17.423766      30 log.go:172] (0xc000b344d0) (0xc00032f360) Create stream\nI0704 08:13:17.423774      30 log.go:172] (0xc000b344d0) (0xc00032f360) Stream added, broadcasting: 3\nI0704 08:13:17.424539      30 log.go:172] (0xc000b344d0) Reply frame received for 3\nI0704 08:13:17.424572      30 log.go:172] (0xc000b344d0) (0xc00032f400) Create stream\nI0704 08:13:17.424582      30 log.go:172] (0xc000b344d0) (0xc00032f400) Stream added, broadcasting: 5\nI0704 08:13:17.425541      30 log.go:172] (0xc000b344d0) Reply frame received for 5\nI0704 08:13:17.425568      30 log.go:172] (0xc000b344d0) (0xc000b181e0) Create stream\nI0704 08:13:17.425579      30 log.go:172] (0xc000b344d0) (0xc000b181e0) Stream added, broadcasting: 7\nI0704 08:13:17.426388      30 log.go:172] (0xc000b344d0) Reply frame received for 7\nI0704 08:13:17.426499      30 log.go:172] (0xc00032f360) (3) Writing data frame\nI0704 08:13:17.426562      30 log.go:172] (0xc00032f360) (3) Writing data frame\nI0704 08:13:17.427266      30 log.go:172] (0xc000b344d0) Data frame received for 5\nI0704 08:13:17.427278      30 log.go:172] (0xc00032f400) (5) Data frame handling\nI0704 08:13:17.427290      30 log.go:172] (0xc00032f400) (5) Data frame sent\nI0704 08:13:17.427803      30 log.go:172] (0xc000b344d0) Data frame received for 5\nI0704 08:13:17.427817      30 log.go:172] (0xc00032f400) (5) Data frame handling\nI0704 08:13:17.427829      30 log.go:172] (0xc00032f400) (5) Data frame 
sent\nI0704 08:13:17.450575      30 log.go:172] (0xc000b344d0) Data frame received for 5\nI0704 08:13:17.450600      30 log.go:172] (0xc00032f400) (5) Data frame handling\nI0704 08:13:17.450635      30 log.go:172] (0xc000b344d0) Data frame received for 7\nI0704 08:13:17.450666      30 log.go:172] (0xc000b181e0) (7) Data frame handling\nI0704 08:13:17.451117      30 log.go:172] (0xc000b344d0) (0xc00032f360) Stream removed, broadcasting: 3\nI0704 08:13:17.451158      30 log.go:172] (0xc000b344d0) Data frame received for 1\nI0704 08:13:17.451174      30 log.go:172] (0xc000b18140) (1) Data frame handling\nI0704 08:13:17.451197      30 log.go:172] (0xc000b18140) (1) Data frame sent\nI0704 08:13:17.451219      30 log.go:172] (0xc000b344d0) (0xc000b18140) Stream removed, broadcasting: 1\nI0704 08:13:17.451236      30 log.go:172] (0xc000b344d0) Go away received\nI0704 08:13:17.451645      30 log.go:172] (0xc000b344d0) (0xc000b18140) Stream removed, broadcasting: 1\nI0704 08:13:17.451692      30 log.go:172] (0xc000b344d0) (0xc00032f360) Stream removed, broadcasting: 3\nI0704 08:13:17.451709      30 log.go:172] (0xc000b344d0) (0xc00032f400) Stream removed, broadcasting: 5\nI0704 08:13:17.451727      30 log.go:172] (0xc000b344d0) (0xc000b181e0) Stream removed, broadcasting: 7\n"
Jul  4 08:13:17.471: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:13:19.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-651" for this suite.

• [SLOW TEST:9.628 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":7,"skipped":101,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
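[Editor's note] The stderr captured above warns that `kubectl run --generator=job/v1` is deprecated in favor of `kubectl create`. A declarative sketch of roughly the Job the logged command creates; the job name, image, restart policy, and command are taken from the logged invocation, while the manifest structure around them is an assumption:

```yaml
# Hypothetical declarative equivalent of the deprecated
# `kubectl run --generator=job/v1 ... --restart=OnFailure` invocation.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job       # name from the logged command
spec:
  template:
    spec:
      restartPolicy: OnFailure        # matches --restart=OnFailure in the log
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true                   # the test attaches and writes to stdin
```

Note that a manifest-based Job has no `--rm` behavior; the test's cleanup corresponds to an explicit `kubectl delete job`.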
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:13:19.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  4 08:13:19.608: INFO: Waiting up to 5m0s for pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3" in namespace "emptydir-3002" to be "success or failure"
Jul  4 08:13:19.612: INFO: Pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.547542ms
Jul  4 08:13:21.616: INFO: Pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007652943s
Jul  4 08:13:23.626: INFO: Pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017661272s
Jul  4 08:13:25.630: INFO: Pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3": Phase="Running", Reason="", readiness=true. Elapsed: 6.021297098s
Jul  4 08:13:27.638: INFO: Pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.029306454s
STEP: Saw pod success
Jul  4 08:13:27.638: INFO: Pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3" satisfied condition "success or failure"
Jul  4 08:13:27.640: INFO: Trying to get logs from node jerma-worker pod pod-2c1f5c99-5397-4677-86e0-298e776beda3 container test-container: 
STEP: delete the pod
Jul  4 08:13:27.658: INFO: Waiting for pod pod-2c1f5c99-5397-4677-86e0-298e776beda3 to disappear
Jul  4 08:13:27.662: INFO: Pod pod-2c1f5c99-5397-4677-86e0-298e776beda3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:13:27.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3002" for this suite.

• [SLOW TEST:8.185 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
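[Editor's note] The EmptyDir test above creates a file with mode 0777 as root on an `emptyDir` volume backed by the node's default medium and verifies the resulting permissions. A sketch of the pod shape; the container name `test-container` comes from the log, but the image and command are stand-ins (the e2e suite uses a dedicated mounttest image):

```yaml
# Hypothetical pod exercising an emptyDir volume on the default medium.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container            # container name from the log above
    image: busybox:1.29             # stand-in for the e2e mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # default medium, as in "node default medium"
```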
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:13:27.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Jul  4 08:13:27.718: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:13:27.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8290" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":9,"skipped":215,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:13:27.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-da12604f-a114-4519-981e-8b576fb52e44
STEP: Creating a pod to test consume secrets
Jul  4 08:13:27.904: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2" in namespace "projected-2153" to be "success or failure"
Jul  4 08:13:27.921: INFO: Pod "pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.524247ms
Jul  4 08:13:29.925: INFO: Pod "pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021873782s
Jul  4 08:13:31.929: INFO: Pod "pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025295216s
STEP: Saw pod success
Jul  4 08:13:31.929: INFO: Pod "pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2" satisfied condition "success or failure"
Jul  4 08:13:31.931: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2 container projected-secret-volume-test: 
STEP: delete the pod
Jul  4 08:13:31.980: INFO: Waiting for pod pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2 to disappear
Jul  4 08:13:31.998: INFO: Pod pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:13:31.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2153" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":226,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:13:32.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul  4 08:13:43.158: INFO: Successfully updated pod "pod-update-0a8862f3-7548-4c4d-bb25-8352b9aa7a8c"
STEP: verifying the updated pod is in kubernetes
Jul  4 08:13:43.171: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:13:43.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2786" for this suite.

• [SLOW TEST:11.173 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":231,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:13:43.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 08:13:43.744: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 08:13:45.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:13:47.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:13:50.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:13:51.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:13:53.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 08:13:56.806: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jul  4 08:13:56.831: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:13:56.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7217" for this suite.
STEP: Destroying namespace "webhook-7217-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.750 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":12,"skipped":234,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:13:56.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul  4 08:14:01.578: INFO: Successfully updated pod "annotationupdate80fb40e9-0e0c-451d-8103-a0fc359c10a8"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:14:05.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6855" for this suite.

• [SLOW TEST:8.695 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":243,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:14:05.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:14:05.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul  4 08:14:08.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7592 create -f -'
Jul  4 08:14:13.482: INFO: stderr: ""
Jul  4 08:14:13.482: INFO: stdout: "e2e-test-crd-publish-openapi-4104-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul  4 08:14:13.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7592 delete e2e-test-crd-publish-openapi-4104-crds test-cr'
Jul  4 08:14:13.691: INFO: stderr: ""
Jul  4 08:14:13.691: INFO: stdout: "e2e-test-crd-publish-openapi-4104-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jul  4 08:14:13.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7592 apply -f -'
Jul  4 08:14:14.563: INFO: stderr: ""
Jul  4 08:14:14.563: INFO: stdout: "e2e-test-crd-publish-openapi-4104-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul  4 08:14:14.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7592 delete e2e-test-crd-publish-openapi-4104-crds test-cr'
Jul  4 08:14:14.770: INFO: stderr: ""
Jul  4 08:14:14.770: INFO: stdout: "e2e-test-crd-publish-openapi-4104-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jul  4 08:14:14.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4104-crds'
Jul  4 08:14:15.135: INFO: stderr: ""
Jul  4 08:14:15.135: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4104-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:14:17.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7592" for this suite.

• [SLOW TEST:12.379 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":14,"skipped":287,"failed":0}
S
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:14:18.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:14:34.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5386" for this suite.

• [SLOW TEST:16.163 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":15,"skipped":288,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:14:34.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  4 08:14:34.246: INFO: Waiting up to 5m0s for pod "pod-1b664330-e28a-4fdd-8240-d5c06addba6e" in namespace "emptydir-46" to be "success or failure"
Jul  4 08:14:34.250: INFO: Pod "pod-1b664330-e28a-4fdd-8240-d5c06addba6e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.013909ms
Jul  4 08:14:36.255: INFO: Pod "pod-1b664330-e28a-4fdd-8240-d5c06addba6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008086848s
Jul  4 08:14:38.259: INFO: Pod "pod-1b664330-e28a-4fdd-8240-d5c06addba6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012075492s
STEP: Saw pod success
Jul  4 08:14:38.259: INFO: Pod "pod-1b664330-e28a-4fdd-8240-d5c06addba6e" satisfied condition "success or failure"
Jul  4 08:14:38.262: INFO: Trying to get logs from node jerma-worker2 pod pod-1b664330-e28a-4fdd-8240-d5c06addba6e container test-container: 
STEP: delete the pod
Jul  4 08:14:38.276: INFO: Waiting for pod pod-1b664330-e28a-4fdd-8240-d5c06addba6e to disappear
Jul  4 08:14:38.345: INFO: Pod pod-1b664330-e28a-4fdd-8240-d5c06addba6e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:14:38.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-46" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":295,"failed":0}
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:14:38.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-8423/configmap-test-b362fe9d-10ba-4f17-a831-238c1a556af9
STEP: Creating a pod to test consume configMaps
Jul  4 08:14:38.741: INFO: Waiting up to 5m0s for pod "pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78" in namespace "configmap-8423" to be "success or failure"
Jul  4 08:14:38.749: INFO: Pod "pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78": Phase="Pending", Reason="", readiness=false. Elapsed: 7.583066ms
Jul  4 08:14:40.753: INFO: Pod "pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011815825s
Jul  4 08:14:42.757: INFO: Pod "pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015933496s
STEP: Saw pod success
Jul  4 08:14:42.757: INFO: Pod "pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78" satisfied condition "success or failure"
Jul  4 08:14:42.760: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78 container env-test: 
STEP: delete the pod
Jul  4 08:14:42.886: INFO: Waiting for pod pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78 to disappear
Jul  4 08:14:42.892: INFO: Pod pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:14:42.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8423" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":297,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:14:42.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9705
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  4 08:14:42.955: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  4 08:15:13.128: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.8:8080/dial?request=hostname&protocol=http&host=10.244.1.7&port=8080&tries=1'] Namespace:pod-network-test-9705 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 08:15:13.128: INFO: >>> kubeConfig: /root/.kube/config
I0704 08:15:13.165444       6 log.go:172] (0xc004d6c4d0) (0xc0016ca960) Create stream
I0704 08:15:13.165486       6 log.go:172] (0xc004d6c4d0) (0xc0016ca960) Stream added, broadcasting: 1
I0704 08:15:13.167539       6 log.go:172] (0xc004d6c4d0) Reply frame received for 1
I0704 08:15:13.167585       6 log.go:172] (0xc004d6c4d0) (0xc001d8e000) Create stream
I0704 08:15:13.167601       6 log.go:172] (0xc004d6c4d0) (0xc001d8e000) Stream added, broadcasting: 3
I0704 08:15:13.168629       6 log.go:172] (0xc004d6c4d0) Reply frame received for 3
I0704 08:15:13.168657       6 log.go:172] (0xc004d6c4d0) (0xc00199d360) Create stream
I0704 08:15:13.168668       6 log.go:172] (0xc004d6c4d0) (0xc00199d360) Stream added, broadcasting: 5
I0704 08:15:13.169813       6 log.go:172] (0xc004d6c4d0) Reply frame received for 5
I0704 08:15:13.246894       6 log.go:172] (0xc004d6c4d0) Data frame received for 3
I0704 08:15:13.246924       6 log.go:172] (0xc001d8e000) (3) Data frame handling
I0704 08:15:13.246943       6 log.go:172] (0xc001d8e000) (3) Data frame sent
I0704 08:15:13.247822       6 log.go:172] (0xc004d6c4d0) Data frame received for 3
I0704 08:15:13.247854       6 log.go:172] (0xc001d8e000) (3) Data frame handling
I0704 08:15:13.247869       6 log.go:172] (0xc004d6c4d0) Data frame received for 5
I0704 08:15:13.247877       6 log.go:172] (0xc00199d360) (5) Data frame handling
I0704 08:15:13.249632       6 log.go:172] (0xc004d6c4d0) Data frame received for 1
I0704 08:15:13.249661       6 log.go:172] (0xc0016ca960) (1) Data frame handling
I0704 08:15:13.249684       6 log.go:172] (0xc0016ca960) (1) Data frame sent
I0704 08:15:13.249700       6 log.go:172] (0xc004d6c4d0) (0xc0016ca960) Stream removed, broadcasting: 1
I0704 08:15:13.249805       6 log.go:172] (0xc004d6c4d0) Go away received
I0704 08:15:13.250191       6 log.go:172] (0xc004d6c4d0) (0xc0016ca960) Stream removed, broadcasting: 1
I0704 08:15:13.250213       6 log.go:172] (0xc004d6c4d0) (0xc001d8e000) Stream removed, broadcasting: 3
I0704 08:15:13.250226       6 log.go:172] (0xc004d6c4d0) (0xc00199d360) Stream removed, broadcasting: 5
Jul  4 08:15:13.250: INFO: Waiting for responses: map[]
Jul  4 08:15:13.255: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.8:8080/dial?request=hostname&protocol=http&host=10.244.2.17&port=8080&tries=1'] Namespace:pod-network-test-9705 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 08:15:13.255: INFO: >>> kubeConfig: /root/.kube/config
I0704 08:15:13.289605       6 log.go:172] (0xc0051f06e0) (0xc001756d20) Create stream
I0704 08:15:13.289659       6 log.go:172] (0xc0051f06e0) (0xc001756d20) Stream added, broadcasting: 1
I0704 08:15:13.291308       6 log.go:172] (0xc0051f06e0) Reply frame received for 1
I0704 08:15:13.291359       6 log.go:172] (0xc0051f06e0) (0xc001a09360) Create stream
I0704 08:15:13.291378       6 log.go:172] (0xc0051f06e0) (0xc001a09360) Stream added, broadcasting: 3
I0704 08:15:13.292329       6 log.go:172] (0xc0051f06e0) Reply frame received for 3
I0704 08:15:13.292381       6 log.go:172] (0xc0051f06e0) (0xc00199d680) Create stream
I0704 08:15:13.292398       6 log.go:172] (0xc0051f06e0) (0xc00199d680) Stream added, broadcasting: 5
I0704 08:15:13.293848       6 log.go:172] (0xc0051f06e0) Reply frame received for 5
I0704 08:15:13.376695       6 log.go:172] (0xc0051f06e0) Data frame received for 3
I0704 08:15:13.376716       6 log.go:172] (0xc001a09360) (3) Data frame handling
I0704 08:15:13.376728       6 log.go:172] (0xc001a09360) (3) Data frame sent
I0704 08:15:13.377291       6 log.go:172] (0xc0051f06e0) Data frame received for 3
I0704 08:15:13.377314       6 log.go:172] (0xc001a09360) (3) Data frame handling
I0704 08:15:13.377772       6 log.go:172] (0xc0051f06e0) Data frame received for 5
I0704 08:15:13.377793       6 log.go:172] (0xc00199d680) (5) Data frame handling
I0704 08:15:13.379018       6 log.go:172] (0xc0051f06e0) Data frame received for 1
I0704 08:15:13.379046       6 log.go:172] (0xc001756d20) (1) Data frame handling
I0704 08:15:13.379053       6 log.go:172] (0xc001756d20) (1) Data frame sent
I0704 08:15:13.379064       6 log.go:172] (0xc0051f06e0) (0xc001756d20) Stream removed, broadcasting: 1
I0704 08:15:13.379073       6 log.go:172] (0xc0051f06e0) Go away received
I0704 08:15:13.379178       6 log.go:172] (0xc0051f06e0) (0xc001756d20) Stream removed, broadcasting: 1
I0704 08:15:13.379202       6 log.go:172] (0xc0051f06e0) (0xc001a09360) Stream removed, broadcasting: 3
I0704 08:15:13.379216       6 log.go:172] (0xc0051f06e0) (0xc00199d680) Stream removed, broadcasting: 5
Jul  4 08:15:13.379: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:15:13.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9705" for this suite.

• [SLOW TEST:30.488 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":313,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:15:13.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jul  4 08:15:13.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:15:26.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7348" for this suite.

• [SLOW TEST:13.292 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":19,"skipped":389,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:15:26.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:15:26.759: INFO: Creating deployment "test-recreate-deployment"
Jul  4 08:15:26.775: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jul  4 08:15:26.788: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jul  4 08:15:28.796: INFO: Waiting deployment "test-recreate-deployment" to complete
Jul  4 08:15:28.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447326, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447326, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447326, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447326, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:15:30.802: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul  4 08:15:30.809: INFO: Updating deployment test-recreate-deployment
Jul  4 08:15:30.809: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul  4 08:15:31.486: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-8705 /apis/apps/v1/namespaces/deployment-8705/deployments/test-recreate-deployment 8d8745f8-17a5-46c4-b3f7-0c7cd8ecf693 5503 2 2020-07-04 08:15:26 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003eb8938  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-04 08:15:31 +0000 UTC,LastTransitionTime:2020-07-04 08:15:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-07-04 08:15:31 +0000 UTC,LastTransitionTime:2020-07-04 08:15:26 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jul  4 08:15:31.489: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-8705 /apis/apps/v1/namespaces/deployment-8705/replicasets/test-recreate-deployment-5f94c574ff 4eafb608-0fea-43e2-9eb1-aa5e00e0e53c 5501 1 2020-07-04 08:15:30 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 8d8745f8-17a5-46c4-b3f7-0c7cd8ecf693 0xc003eb8d57 0xc003eb8d58}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003eb8dc8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  4 08:15:31.489: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul  4 08:15:31.490: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-8705 /apis/apps/v1/namespaces/deployment-8705/replicasets/test-recreate-deployment-799c574856 6a315daa-4aaf-42a5-b6b5-377fe4b8b57d 5491 2 2020-07-04 08:15:26 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 8d8745f8-17a5-46c4-b3f7-0c7cd8ecf693 0xc003eb8e37 0xc003eb8e38}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003eb8eb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  4 08:15:31.493: INFO: Pod "test-recreate-deployment-5f94c574ff-qnv47" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-qnv47 test-recreate-deployment-5f94c574ff- deployment-8705 /api/v1/namespaces/deployment-8705/pods/test-recreate-deployment-5f94c574ff-qnv47 e97b7837-c805-4922-a64e-93a7660fa950 5502 0 2020-07-04 08:15:30 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 4eafb608-0fea-43e2-9eb1-aa5e00e0e53c 0xc003eb9357 0xc003eb9358}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r67fs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r67fs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r67fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 08:15:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 08:15:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 08:15:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 08:15:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-04 08:15:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:15:31.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8705" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":20,"skipped":423,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:15:31.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-fa2fcb38-983a-4b3b-8bf6-1c27252785d7
STEP: Creating a pod to test consume configMaps
Jul  4 08:15:31.684: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f" in namespace "projected-8265" to be "success or failure"
Jul  4 08:15:31.790: INFO: Pod "pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f": Phase="Pending", Reason="", readiness=false. Elapsed: 105.844061ms
Jul  4 08:15:33.795: INFO: Pod "pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110298436s
Jul  4 08:15:35.813: INFO: Pod "pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f": Phase="Running", Reason="", readiness=true. Elapsed: 4.128407807s
Jul  4 08:15:37.921: INFO: Pod "pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.236804786s
STEP: Saw pod success
Jul  4 08:15:37.921: INFO: Pod "pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f" satisfied condition "success or failure"
Jul  4 08:15:37.924: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f container projected-configmap-volume-test: 
STEP: delete the pod
Jul  4 08:15:38.541: INFO: Waiting for pod pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f to disappear
Jul  4 08:15:38.546: INFO: Pod pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:15:38.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8265" for this suite.

• [SLOW TEST:6.989 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":442,"failed":0}
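Editor's note: the test above mounts a ConfigMap through a `projected` volume and reads it as a non-root user. A minimal manifest exercising the same behavior might look like the sketch below; all names (`projected-configmap-nonroot`, `my-configmap`) are illustrative, not taken from the test run.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-nonroot   # illustrative name
spec:
  securityContext:
    runAsUser: 1000                   # run as non-root, as the test does
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-configmap          # assumed to exist in the namespace
  restartPolicy: Never
```

The pod succeeds only if the projected file is readable by the non-root UID, which is what the conformance check verifies.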
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:15:38.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  4 08:15:38.665: INFO: Waiting up to 5m0s for pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f" in namespace "emptydir-7939" to be "success or failure"
Jul  4 08:15:38.674: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.121758ms
Jul  4 08:15:40.677: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01243045s
Jul  4 08:15:42.681: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016657379s
Jul  4 08:15:44.963: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.298474311s
Jul  4 08:15:46.967: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Running", Reason="", readiness=true. Elapsed: 8.302131149s
Jul  4 08:15:48.971: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.306335955s
STEP: Saw pod success
Jul  4 08:15:48.971: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f" satisfied condition "success or failure"
Jul  4 08:15:48.974: INFO: Trying to get logs from node jerma-worker pod pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f container test-container: 
STEP: delete the pod
Jul  4 08:15:49.139: INFO: Waiting for pod pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f to disappear
Jul  4 08:15:49.172: INFO: Pod pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:15:49.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7939" for this suite.

• [SLOW TEST:10.626 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":453,"failed":0}
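Editor's note: the emptyDir permission tests above verify that a non-root container can create and read a 0666-mode file on the volume. A rough, hand-written equivalent (the test itself uses a dedicated mount-test image, not this shell command) could be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check           # illustrative name
spec:
  securityContext:
    runAsUser: 1000                   # non-root, matching this test variant
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "touch /test/f && chmod 0666 /test/f && stat -c '%a' /test/f"]
    volumeMounts:
    - name: vol
      mountPath: /test
  volumes:
  - name: vol
    emptyDir: {}                      # default medium (node-local disk)
  restartPolicy: Never
```

The `(non-root,0666,tmpfs)` variant that appears later in this run differs only in the volume definition: `emptyDir: {medium: Memory}`.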
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:15:49.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-1505753d-d1c4-44b0-955a-60d7f432faaf in namespace container-probe-88
Jul  4 08:15:53.607: INFO: Started pod liveness-1505753d-d1c4-44b0-955a-60d7f432faaf in namespace container-probe-88
STEP: checking the pod's current state and verifying that restartCount is present
Jul  4 08:15:53.712: INFO: Initial restart count of pod liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is 0
Jul  4 08:16:15.771: INFO: Restart count of pod container-probe-88/liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is now 1 (22.058958774s elapsed)
Jul  4 08:16:35.814: INFO: Restart count of pod container-probe-88/liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is now 2 (42.102365941s elapsed)
Jul  4 08:16:55.856: INFO: Restart count of pod container-probe-88/liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is now 3 (1m2.144388316s elapsed)
Jul  4 08:17:15.899: INFO: Restart count of pod container-probe-88/liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is now 4 (1m22.187113487s elapsed)
Jul  4 08:18:26.803: INFO: Restart count of pod container-probe-88/liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is now 5 (2m33.091144687s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:18:26.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-88" for this suite.

• [SLOW TEST:157.655 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":462,"failed":0}
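Editor's note: the restart counts logged above come from a liveness probe that is engineered to fail repeatedly. A pod along these lines (name and timings illustrative, following the standard liveness-probe pattern) reproduces the monotonically increasing `RestartCount`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-restarts             # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox
    # Healthy for 10s, then the probe file is removed; the exec probe
    # starts failing and the kubelet restarts the container repeatedly.
    command: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

Watching the pod (e.g. `kubectl get pod liveness-restarts -w`) should show the RESTARTS column only ever increasing, which is exactly the invariant this conformance test asserts.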
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:18:26.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jul  4 08:18:27.251: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-533 /api/v1/namespaces/watch-533/configmaps/e2e-watch-test-watch-closed 3fb8f063-ba6e-42eb-99dc-2c93e14522f1 6136 0 2020-07-04 08:18:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  4 08:18:27.251: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-533 /api/v1/namespaces/watch-533/configmaps/e2e-watch-test-watch-closed 3fb8f063-ba6e-42eb-99dc-2c93e14522f1 6137 0 2020-07-04 08:18:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jul  4 08:18:27.299: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-533 /api/v1/namespaces/watch-533/configmaps/e2e-watch-test-watch-closed 3fb8f063-ba6e-42eb-99dc-2c93e14522f1 6139 0 2020-07-04 08:18:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  4 08:18:27.299: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-533 /api/v1/namespaces/watch-533/configmaps/e2e-watch-test-watch-closed 3fb8f063-ba6e-42eb-99dc-2c93e14522f1 6141 0 2020-07-04 08:18:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:18:27.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-533" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":24,"skipped":475,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:18:27.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 08:18:27.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef" in namespace "projected-1025" to be "success or failure"
Jul  4 08:18:27.465: INFO: Pod "downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 21.329018ms
Jul  4 08:18:29.470: INFO: Pod "downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025728052s
Jul  4 08:18:31.474: INFO: Pod "downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029795095s
STEP: Saw pod success
Jul  4 08:18:31.474: INFO: Pod "downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef" satisfied condition "success or failure"
Jul  4 08:18:31.477: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef container client-container: 
STEP: delete the pod
Jul  4 08:18:31.614: INFO: Waiting for pod downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef to disappear
Jul  4 08:18:31.623: INFO: Pod downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:18:31.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1025" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":478,"failed":0}
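Editor's note: the downward API volume plugin exercised above exposes the container's memory request as a file. A minimal sketch (names and the 32Mi request are illustrative) using a `projected` downwardAPI source with `resourceFieldRef`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mem-request       # illustrative name
spec:
  containers:
  - name: client
    image: busybox
    command: ["cat", "/etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client
              resource: requests.memory
  restartPolicy: Never
```

The container prints the request value (in bytes) read from the mounted file, which is the behavior the test validates.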
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:18:31.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  4 08:18:31.709: INFO: Waiting up to 5m0s for pod "pod-8c50079b-cff6-47b0-8c9a-bac788186422" in namespace "emptydir-3112" to be "success or failure"
Jul  4 08:18:31.779: INFO: Pod "pod-8c50079b-cff6-47b0-8c9a-bac788186422": Phase="Pending", Reason="", readiness=false. Elapsed: 70.039079ms
Jul  4 08:18:33.810: INFO: Pod "pod-8c50079b-cff6-47b0-8c9a-bac788186422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100886133s
Jul  4 08:18:35.814: INFO: Pod "pod-8c50079b-cff6-47b0-8c9a-bac788186422": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105225469s
STEP: Saw pod success
Jul  4 08:18:35.815: INFO: Pod "pod-8c50079b-cff6-47b0-8c9a-bac788186422" satisfied condition "success or failure"
Jul  4 08:18:35.818: INFO: Trying to get logs from node jerma-worker pod pod-8c50079b-cff6-47b0-8c9a-bac788186422 container test-container: 
STEP: delete the pod
Jul  4 08:18:35.851: INFO: Waiting for pod pod-8c50079b-cff6-47b0-8c9a-bac788186422 to disappear
Jul  4 08:18:35.880: INFO: Pod pod-8c50079b-cff6-47b0-8c9a-bac788186422 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:18:35.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3112" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":480,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:18:35.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jul  4 08:18:36.037: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jul  4 08:18:53.171: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:18:53.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6676" for this suite.

• [SLOW TEST:17.238 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":500,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:18:53.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:18:53.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8111" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":28,"skipped":510,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:18:53.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-9ttt
STEP: Creating a pod to test atomic-volume-subpath
Jul  4 08:18:53.404: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9ttt" in namespace "subpath-775" to be "success or failure"
Jul  4 08:18:53.412: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120847ms
Jul  4 08:18:55.416: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01181068s
Jul  4 08:18:57.418: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014424763s
Jul  4 08:18:59.451: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 6.047201387s
Jul  4 08:19:01.455: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 8.051435647s
Jul  4 08:19:03.460: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 10.055865628s
Jul  4 08:19:05.463: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 12.059341504s
Jul  4 08:19:07.466: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 14.062602264s
Jul  4 08:19:09.470: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 16.066378414s
Jul  4 08:19:11.474: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 18.069803906s
Jul  4 08:19:14.918: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 21.514574384s
Jul  4 08:19:16.921: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 23.517493274s
Jul  4 08:19:18.925: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 25.521005189s
Jul  4 08:19:20.928: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 27.524440943s
Jul  4 08:19:24.799: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 31.395487442s
Jul  4 08:19:27.546: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 34.141854672s
Jul  4 08:19:29.550: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 36.145874315s
Jul  4 08:19:31.553: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.148915399s
STEP: Saw pod success
Jul  4 08:19:31.553: INFO: Pod "pod-subpath-test-downwardapi-9ttt" satisfied condition "success or failure"
Jul  4 08:19:31.555: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-9ttt container test-container-subpath-downwardapi-9ttt: 
STEP: delete the pod
Jul  4 08:19:31.782: INFO: Waiting for pod pod-subpath-test-downwardapi-9ttt to disappear
Jul  4 08:19:31.834: INFO: Pod pod-subpath-test-downwardapi-9ttt no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-9ttt
Jul  4 08:19:31.834: INFO: Deleting pod "pod-subpath-test-downwardapi-9ttt" in namespace "subpath-775"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:19:31.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-775" for this suite.

• [SLOW TEST:38.894 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":29,"skipped":519,"failed":0}
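Editor's note: "subpaths with downward pod" means mounting a single entry of a downwardAPI (atomic-writer) volume via `subPath`. A hedged sketch of the shape involved (names illustrative, not the test's actual pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward              # illustrative name
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /mnt/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /mnt/labels
      subPath: labels                 # mount one file out of the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
  restartPolicy: Never
```

The interesting property being tested is that the subPath mount stays correct even though atomic-writer volumes swap their contents via symlinked directories on update.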
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:19:32.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  4 08:19:54.943: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:19:54.952: INFO: Pod pod-with-poststart-http-hook still exists
Jul  4 08:19:56.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:19:56.955: INFO: Pod pod-with-poststart-http-hook still exists
Jul  4 08:19:58.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:19:58.956: INFO: Pod pod-with-poststart-http-hook still exists
Jul  4 08:20:00.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:20:01.051: INFO: Pod pod-with-poststart-http-hook still exists
Jul  4 08:20:02.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:20:03.955: INFO: Pod pod-with-poststart-http-hook still exists
Jul  4 08:20:04.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:20:05.456: INFO: Pod pod-with-poststart-http-hook still exists
Jul  4 08:20:06.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:20:06.956: INFO: Pod pod-with-poststart-http-hook still exists
Jul  4 08:20:08.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:20:09.781: INFO: Pod pod-with-poststart-http-hook still exists
Jul  4 08:20:10.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:20:10.956: INFO: Pod pod-with-poststart-http-hook still exists
Jul  4 08:20:12.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:20:15.645: INFO: Pod pod-with-poststart-http-hook still exists
Jul  4 08:20:16.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:20:17.794: INFO: Pod pod-with-poststart-http-hook still exists
Jul  4 08:20:18.953: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  4 08:20:19.027: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:20:19.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9017" for this suite.

• [SLOW TEST:46.848 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":570,"failed":0}
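The deletion phase above polls roughly every two seconds until the pod is gone ("still exists" ... "no longer exists"). A minimal Python sketch of that wait loop, with a hypothetical `get_pod` callable standing in for the Kubernetes API lookup (not the framework's actual helper):

```python
import time

def wait_for_pod_to_disappear(get_pod, name, interval=2.0, timeout=60.0, sleep=time.sleep):
    """Poll get_pod(name) until it returns None (pod deleted) or the timeout expires.

    get_pod is an illustrative stand-in for an API call; it should return None
    once the pod no longer exists, mirroring the log lines above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_pod(name) is None:
            return True   # pod disappeared
        sleep(interval)   # "Pod ... still exists" -> wait and retry
    return False          # timed out while the pod still existed

# Example with a fake API that reports the pod gone on the third poll:
calls = {"n": 0}
def fake_get_pod(name):
    calls["n"] += 1
    return None if calls["n"] >= 3 else {"name": name}

assert wait_for_pod_to_disappear(fake_get_pod, "pod-with-poststart-http-hook",
                                 interval=0, timeout=5) is True
```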
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:20:19.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7961.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7961.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 253.53.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.53.253_udp@PTR;check="$$(dig +tcp +noall +answer +search 253.53.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.53.253_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7961.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7961.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 253.53.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.53.253_udp@PTR;check="$$(dig +tcp +noall +answer +search 253.53.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.53.253_tcp@PTR;sleep 1; done
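The probe scripts above derive two names from an IPv4 address: a dashed pod A record (the `hostname -i | awk` one-liner) and a reversed `in-addr.arpa.` name for the PTR lookup of the service ClusterIP. A sketch of both transformations in Python; the namespace and service IP come from the log, while the function names themselves are illustrative:

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Dash-join the IPv4 octets, as the awk one-liner in the probe does."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

def ptr_name(ip: str) -> str:
    """Reverse the IPv4 octets into the in-addr.arpa zone used for PTR lookups."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

# The service ClusterIP 10.104.53.253 from the log reverses to the name dig queries:
assert ptr_name("10.104.53.253") == "253.53.104.10.in-addr.arpa."
# Hypothetical pod IP, just to show the dashed A-record form:
assert pod_a_record("10.244.1.7", "dns-7961") == "10-244-1-7.dns-7961.pod.cluster.local"
```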

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  4 08:20:31.853: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:31.855: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:31.858: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:31.861: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:31.879: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:31.882: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:31.884: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:31.887: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:31.902: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]

Jul  4 08:20:36.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:36.908: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:36.911: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:36.914: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:36.942: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:36.944: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:36.947: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:36.949: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:36.966: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]

Jul  4 08:20:41.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:41.909: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:41.912: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:41.915: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:41.932: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:41.934: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:41.936: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:41.938: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:41.950: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]

Jul  4 08:20:46.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:46.908: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:46.910: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:46.913: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:47.257: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:47.260: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:47.263: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:47.266: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:47.280: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]

Jul  4 08:20:51.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:51.908: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:51.910: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:51.913: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:51.930: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:51.932: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:51.934: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:51.936: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:51.948: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]

Jul  4 08:20:56.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:56.943: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:56.946: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:56.954: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:57.149: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:57.151: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:57.154: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:57.156: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul  4 08:20:57.170: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]

Jul  4 08:21:03.188: INFO: DNS probes using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:21:10.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7961" for this suite.

• [SLOW TEST:52.263 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":31,"skipped":576,"failed":0}
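The "Lookups ... failed for: [...]" lines above show the prober retrying every five seconds, collecting the names whose OK marker files are still missing, until one pass reports no failures. A simplified sketch of that aggregation loop (hypothetical helper names; the real framework reads marker files from `/results` inside the probe pod):

```python
def failed_lookups(results: dict) -> list:
    """Return the lookup names that have not yet produced an OK marker."""
    return [name for name, ok in results.items() if not ok]

def probe_until_success(fetch_results, attempts=10):
    """Re-fetch results until every lookup succeeds, mirroring the retry loop above."""
    for _ in range(attempts):
        if not failed_lookups(fetch_results()):
            return True   # "DNS probes ... succeeded"
        # otherwise the test logs: "Lookups using <pod> failed for: [...]" and retries
    return False

# Fake fetcher: lookups fail at first, then succeed on the third attempt,
# like the DNS records propagating in the run above.
state = {"attempt": 0}
def fake_fetch():
    state["attempt"] += 1
    ok = state["attempt"] >= 3
    return {"wheezy_udp@dns-test-service": ok, "jessie_tcp@dns-test-service": ok}

assert probe_until_success(fake_fetch) is True
```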
SSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:21:11.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:21:12.322: INFO: Waiting up to 5m0s for pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba" in namespace "security-context-test-7960" to be "success or failure"
Jul  4 08:21:12.344: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 21.744161ms
Jul  4 08:21:14.375: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052612644s
Jul  4 08:21:16.577: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255200983s
Jul  4 08:21:18.687: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365128484s
Jul  4 08:21:20.692: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.369319494s
Jul  4 08:21:22.782: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.460207292s
Jul  4 08:21:24.823: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 12.50114421s
Jul  4 08:21:27.162: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Running", Reason="", readiness=true. Elapsed: 14.839755481s
Jul  4 08:21:29.165: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.842939255s
Jul  4 08:21:29.165: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:21:29.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7960" for this suite.

• [SLOW TEST:17.857 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":585,"failed":0}
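The pod under test sets `securityContext.runAsUser: 65534` so the container's effective uid can be verified. A sketch of the relevant spec shape as a Python dict; the image and command are illustrative stand-ins, not copied from the framework:

```python
# Minimal pod spec shape for the runAsUser check (illustrative values).
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-user-65534"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "busybox",
            "image": "busybox",                 # hypothetical image reference
            "command": ["sh", "-c", "id -u"],   # prints the effective uid
            "securityContext": {"runAsUser": 65534},
        }],
    },
}

# uid 65534 is the conventional "nobody" user on Linux, which is what the
# test name "should run the container with uid 65534" refers to.
assert pod_spec["spec"]["containers"][0]["securityContext"]["runAsUser"] == 65534
```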
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:21:29.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 08:21:31.680: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 08:21:33.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:21:37.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:21:39.879: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:21:42.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:21:44.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:21:46.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:21:47.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:21:50.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:21:51.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:21:53.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:21:57.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:21:58.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:22:00.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:22:01.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 08:22:04.897: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:22:05.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-885" for this suite.
STEP: Destroying namespace "webhook-885-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:36.137 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":33,"skipped":616,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:22:05.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Jul  4 08:22:05.376: INFO: Waiting up to 5m0s for pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910" in namespace "containers-8710" to be "success or failure"
Jul  4 08:22:05.380: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 4.419711ms
Jul  4 08:22:08.381: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 3.005254486s
Jul  4 08:22:10.531: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 5.155417609s
Jul  4 08:22:12.572: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 7.196636758s
Jul  4 08:22:15.628: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 10.251878109s
Jul  4 08:22:17.631: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 12.255031471s
Jul  4 08:22:19.634: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 14.257801759s
Jul  4 08:22:22.196: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 16.820420033s
Jul  4 08:22:24.741: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 19.365657317s
Jul  4 08:22:27.700: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 22.324040619s
Jul  4 08:22:29.702: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 24.326629361s
Jul  4 08:22:31.706: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 26.330010812s
Jul  4 08:22:34.634: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 29.258282746s
Jul  4 08:22:36.638: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 31.261707111s
Jul  4 08:22:38.813: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 33.437644523s
Jul  4 08:22:40.816: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 35.440236865s
Jul  4 08:22:42.819: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 37.443133099s
Jul  4 08:22:44.823: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 39.446777767s
Jul  4 08:22:47.000: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 41.623741327s
Jul  4 08:22:50.245: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 44.869619614s
Jul  4 08:22:52.248: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 46.872379581s
Jul  4 08:22:54.502: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 49.126033232s
Jul  4 08:22:59.694: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 54.317950318s
Jul  4 08:23:01.696: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 56.320639588s
STEP: Saw pod success
Jul  4 08:23:01.696: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910" satisfied condition "success or failure"
Jul  4 08:23:01.698: INFO: Trying to get logs from node jerma-worker pod client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910 container test-container: 
STEP: delete the pod
Jul  4 08:23:02.371: INFO: Waiting for pod client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910 to disappear
Jul  4 08:23:02.628: INFO: Pod client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:23:02.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8710" for this suite.

• [SLOW TEST:57.717 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":669,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:23:03.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-b843fc32-e8fe-4b7f-83c8-aa010140181d
STEP: Creating a pod to test consume configMaps
Jul  4 08:23:03.221: INFO: Waiting up to 5m0s for pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf" in namespace "configmap-8300" to be "success or failure"
Jul  4 08:23:03.238: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.417507ms
Jul  4 08:23:06.196: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.974989637s
Jul  4 08:23:08.352: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.130977561s
Jul  4 08:23:10.418: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.197065338s
Jul  4 08:23:12.422: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.200578716s
Jul  4 08:23:14.437: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.21561948s
Jul  4 08:23:16.440: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.218976227s
Jul  4 08:23:19.508: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.286917154s
Jul  4 08:23:21.544: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.322585748s
Jul  4 08:23:24.038: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.816885856s
Jul  4 08:23:26.041: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.819537722s
Jul  4 08:23:28.044: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 24.822977224s
Jul  4 08:23:33.578: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 30.356649883s
Jul  4 08:23:35.610: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 32.388713268s
Jul  4 08:23:37.724: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 34.503001809s
Jul  4 08:23:39.727: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 36.506056172s
Jul  4 08:23:41.731: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Running", Reason="", readiness=true. Elapsed: 38.50963938s
Jul  4 08:23:44.264: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.042379108s
STEP: Saw pod success
Jul  4 08:23:44.264: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf" satisfied condition "success or failure"
Jul  4 08:23:44.432: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf container configmap-volume-test: 
STEP: delete the pod
Jul  4 08:23:44.928: INFO: Waiting for pod pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf to disappear
Jul  4 08:23:45.305: INFO: Pod pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:23:45.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8300" for this suite.

• [SLOW TEST:42.283 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":676,"failed":0}
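The "consumable from pods in volume with mappings" case exercises the `items` field of a ConfigMap volume, which remaps a ConfigMap key to a chosen file path under the mount. A minimal illustrative manifest of that shape (names here are hypothetical, not the generated `configmap-test-volume-map-...` ones from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2            # key in the ConfigMap
        path: path/to/data-2   # file path under the mount, remapped from the key
```

Without `items`, every key would appear as a file named after the key at the mount root; the mapping is what this conformance test verifies.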
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:23:45.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 08:23:47.834: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 08:23:49.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:23:52.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:23:54.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:23:56.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:23:57.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:00.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:02.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:04.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:07.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:08.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:10.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:11.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:14.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:16.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:17.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:19.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:21.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:24.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:27.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:30.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:31.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:33.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:35.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:37.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:40.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:41.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:44.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:46.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:48.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:24:50.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 08:24:55.295: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
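The STEP sequence above exercises a deny-by-default admission policy with a namespace bypass: non-compliant pods and configmaps are rejected (on create, PUT, and PATCH) unless the object lives in a whitelisted namespace. A sketch of that decision logic under stated assumptions — the types, the `allow` helper, and the whitelisted namespace name are illustrative, not the e2e suite's actual webhook implementation:

```go
package main

import "fmt"

// request models the slice of an AdmissionReview the policy inspects.
type request struct {
	Kind      string
	Namespace string
	Compliant bool
}

// whitelisted namespaces bypass the webhook policy entirely; the name here
// is taken from the log's markers namespace but is otherwise illustrative.
var whitelisted = map[string]bool{"webhook-9728-markers": true}

// allow returns true when the request should be admitted.
func allow(r request) bool {
	if whitelisted[r.Namespace] {
		return true // bypass: namespace is excluded from the policy
	}
	switch r.Kind {
	case "Pod", "ConfigMap":
		return r.Compliant // deny non-compliant pods and configmaps
	}
	return true // other kinds are not covered by this policy
}

func main() {
	fmt.Println(allow(request{Kind: "Pod", Namespace: "default", Compliant: false}))
	fmt.Println(allow(request{Kind: "ConfigMap", Namespace: "webhook-9728-markers", Compliant: false}))
}
```

Because updates are also sent to the webhook, the same check covers the PUT and PATCH rejection steps in the log: the admitted configmap cannot be mutated into a non-compliant one.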
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:25:09.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9728" for this suite.
STEP: Destroying namespace "webhook-9728-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:87.918 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":36,"skipped":685,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:25:13.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul  4 08:25:13.398: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7510 0 2020-07-04 08:25:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  4 08:25:13.398: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7510 0 2020-07-04 08:25:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul  4 08:25:23.404: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7560 0 2020-07-04 08:25:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul  4 08:25:23.404: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7560 0 2020-07-04 08:25:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul  4 08:25:33.409: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7590 0 2020-07-04 08:25:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  4 08:25:33.409: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7590 0 2020-07-04 08:25:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul  4 08:25:43.739: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7620 0 2020-07-04 08:25:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  4 08:25:43.740: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7620 0 2020-07-04 08:25:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul  4 08:25:53.744: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-b 96aaacf1-88e9-4b5d-91cd-a32be58a2b9a 7645 0 2020-07-04 08:25:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  4 08:25:53.744: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-b 96aaacf1-88e9-4b5d-91cd-a32be58a2b9a 7645 0 2020-07-04 08:25:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul  4 08:26:05.044: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-b 96aaacf1-88e9-4b5d-91cd-a32be58a2b9a 7671 0 2020-07-04 08:25:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  4 08:26:05.044: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-b 96aaacf1-88e9-4b5d-91cd-a32be58a2b9a 7671 0 2020-07-04 08:25:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:26:15.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6715" for this suite.
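Each notification in the run above arrives twice because two selectors match it: the label-specific watcher and the A-or-B watcher. The routing can be sketched as a small demultiplexer — the `event`/`dispatch` names are hypothetical stand-ins, not the client-go watch API the test actually uses:

```go
package main

import "fmt"

// event models one configmap watch notification.
type event struct {
	Type  string // ADDED, MODIFIED, DELETED
	Label string // value of the watch-this-configmap label
}

// dispatch delivers ev to every watcher whose selector matches its label
// and returns the names of the watchers that observed it.
func dispatch(ev event, watchers map[string]func(string) bool) []string {
	var notified []string
	for name, matches := range watchers {
		if matches(ev.Label) {
			notified = append(notified, name)
		}
	}
	return notified
}

func main() {
	watchers := map[string]func(string) bool{
		"A":      func(l string) bool { return l == "multiple-watchers-A" },
		"B":      func(l string) bool { return l == "multiple-watchers-B" },
		"A-or-B": func(l string) bool { return l == "multiple-watchers-A" || l == "multiple-watchers-B" },
	}
	got := dispatch(event{Type: "ADDED", Label: "multiple-watchers-A"}, watchers)
	fmt.Println(len(got)) // watchers A and A-or-B both observe the notification
}
```

That is why every `Got : ADDED/MODIFIED/DELETED` line in the log appears as a pair, and why deleting configmap A produces no event on watcher B.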

• [SLOW TEST:62.304 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":37,"skipped":695,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:26:15.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:26:16.556: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247" in namespace "security-context-test-2830" to be "success or failure"
Jul  4 08:26:17.370: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 813.956195ms
Jul  4 08:26:19.553: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 2.997317846s
Jul  4 08:26:21.660: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 5.104798395s
Jul  4 08:26:23.664: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 7.108150193s
Jul  4 08:26:26.724: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 10.168442492s
Jul  4 08:26:28.780: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 12.224004705s
Jul  4 08:26:30.783: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 14.227027409s
Jul  4 08:26:33.232: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 16.67667268s
Jul  4 08:26:35.235: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 18.679743817s
Jul  4 08:26:37.238: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 20.68230915s
Jul  4 08:26:40.014: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 23.45888356s
Jul  4 08:26:42.020: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 25.464331458s
Jul  4 08:26:44.296: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 27.740784439s
Jul  4 08:26:46.300: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 29.74400619s
Jul  4 08:26:48.355: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 31.799809623s
Jul  4 08:26:50.358: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Running", Reason="", readiness=true. Elapsed: 33.802492652s
Jul  4 08:26:52.379: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Running", Reason="", readiness=true. Elapsed: 35.823661787s
Jul  4 08:26:55.492: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.936387848s
Jul  4 08:26:55.492: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247" satisfied condition "success or failure"
Jul  4 08:26:56.164: INFO: Got logs for pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:26:56.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2830" for this suite.

• [SLOW TEST:40.732 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":726,"failed":0}
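Annotation: the test above passes because the container's only log line, "ip: RTNETLINK answers: Operation not permitted", proves the kernel denied a privileged network operation to the unprivileged container. A minimal sketch of the manifest shape being exercised follows; the image, command, and names are illustrative assumptions, not copied from the suite.

```python
# Sketch (assumed, not the e2e framework's actual code): a pod whose container
# sets privileged: false and then attempts a network change that requires
# CAP_NET_ADMIN. The kernel denies it, producing the RTNETLINK log seen above.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-privileged-false-example"},  # name is illustrative
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "busybox-privileged-false-example",
                "image": "busybox",
                # Command is an assumption: any RTNETLINK-level change (here,
                # adding a dummy link) fails without CAP_NET_ADMIN with
                # "ip: RTNETLINK answers: Operation not permitted".
                "command": ["sh", "-c", "ip link add dummy0 type dummy || true"],
                "securityContext": {"privileged": False},
            }
        ],
    },
}

assert pod_manifest["spec"]["containers"][0]["securityContext"]["privileged"] is False
```

Because `restartPolicy` is `Never`, the pod runs to `Succeeded` once the shell exits, which matches the Pending → Running → Succeeded phases polled in the log.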
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:26:56.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-a2ab9ff6-076f-4080-8458-d50f09b6af4c
STEP: Creating a pod to test consume configMaps
Jul  4 08:26:56.397: INFO: Waiting up to 5m0s for pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749" in namespace "configmap-929" to be "success or failure"
Jul  4 08:26:56.404: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 7.117907ms
Jul  4 08:26:59.067: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 2.670242948s
Jul  4 08:27:01.070: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 4.673018882s
Jul  4 08:27:03.202: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 6.805282085s
Jul  4 08:27:05.260: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 8.863229342s
Jul  4 08:27:07.602: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 11.205251135s
Jul  4 08:27:09.745: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Running", Reason="", readiness=true. Elapsed: 13.347960007s
Jul  4 08:27:11.801: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Running", Reason="", readiness=true. Elapsed: 15.404690069s
Jul  4 08:27:14.196: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Running", Reason="", readiness=true. Elapsed: 17.799488292s
Jul  4 08:27:16.260: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.863682508s
STEP: Saw pod success
Jul  4 08:27:16.260: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749" satisfied condition "success or failure"
Jul  4 08:27:16.262: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749 container configmap-volume-test: 
STEP: delete the pod
Jul  4 08:27:16.317: INFO: Waiting for pod pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749 to disappear
Jul  4 08:27:17.014: INFO: Pod pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:27:17.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-929" for this suite.

• [SLOW TEST:20.752 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":726,"failed":0}
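Annotation: the ConfigMap volume test above creates a ConfigMap, mounts it into a pod as a volume, and reads the key back from the filesystem. A hedged sketch of that wiring, with illustrative names, keys, and mount path (the suite's actual values are generated):

```python
# Sketch under stated assumptions: the ConfigMap/pod pair this kind of test
# builds. The pod mounts the ConfigMap as a volume and cats a key file.
config_map = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test-volume-example"},  # illustrative name
    "data": {"data-1": "value-1"},
}

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "configmap-volume-test",  # container name matches the log's
                "image": "busybox",
                # Reading the mounted key file back is the essence of the check.
                "command": ["cat", "/etc/configmap-volume/data-1"],
                "volumeMounts": [
                    {"name": "configmap-volume", "mountPath": "/etc/configmap-volume"}
                ],
            }
        ],
        "volumes": [
            {
                "name": "configmap-volume",
                "configMap": {"name": config_map["metadata"]["name"]},
            }
        ],
    },
}

assert pod_manifest["spec"]["volumes"][0]["configMap"]["name"] == "configmap-test-volume-example"
```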
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:27:17.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-f503f394-4652-4318-b3f8-0bbb6f871b35
STEP: Creating a pod to test consume configMaps
Jul  4 08:27:18.216: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d" in namespace "projected-4408" to be "success or failure"
Jul  4 08:27:18.496: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 280.354568ms
Jul  4 08:27:20.499: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.283034269s
Jul  4 08:27:22.584: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368008926s
Jul  4 08:27:25.240: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.024647193s
Jul  4 08:27:27.328: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.112010757s
Jul  4 08:27:29.427: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.211613413s
Jul  4 08:27:31.896: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.680109905s
Jul  4 08:27:33.907: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.691268193s
Jul  4 08:27:36.158: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.942074662s
Jul  4 08:27:38.602: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.386609177s
Jul  4 08:27:40.724: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.50790591s
Jul  4 08:27:42.727: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.511453113s
Jul  4 08:27:44.731: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.515071893s
Jul  4 08:27:46.735: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.519078346s
Jul  4 08:27:50.148: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.932202993s
Jul  4 08:27:52.151: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 33.93527763s
Jul  4 08:27:55.450: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 37.234564136s
Jul  4 08:27:57.453: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 39.237563776s
Jul  4 08:27:59.590: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 41.374790061s
Jul  4 08:28:01.594: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 43.377963375s
Jul  4 08:28:03.597: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 45.38136824s
Jul  4 08:28:05.600: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.384326573s
Jul  4 08:28:08.057: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 49.841776915s
Jul  4 08:28:10.830: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 52.61480792s
Jul  4 08:28:15.064: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 56.848379591s
Jul  4 08:28:17.067: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 58.851676401s
Jul  4 08:28:19.196: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.980398309s
Jul  4 08:28:21.396: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.180135513s
Jul  4 08:28:23.399: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.183628752s
Jul  4 08:28:25.403: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.187097163s
Jul  4 08:28:28.239: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.023083128s
Jul  4 08:28:30.406: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.190218982s
Jul  4 08:28:32.409: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.193335392s
Jul  4 08:28:34.412: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.196490651s
Jul  4 08:28:36.415: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.199282025s
Jul  4 08:28:39.106: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.890059323s
Jul  4 08:28:41.503: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.28691103s
Jul  4 08:28:44.220: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.004466295s
Jul  4 08:28:46.282: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.065926971s
Jul  4 08:28:48.437: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.221507712s
Jul  4 08:28:50.508: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.292043325s
Jul  4 08:28:53.156: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Running", Reason="", readiness=true. Elapsed: 1m34.939928941s
Jul  4 08:28:55.158: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Running", Reason="", readiness=true. Elapsed: 1m36.942744676s
Jul  4 08:28:57.252: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Running", Reason="", readiness=true. Elapsed: 1m39.036401261s
Jul  4 08:28:59.255: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m41.039564341s
STEP: Saw pod success
Jul  4 08:28:59.255: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d" satisfied condition "success or failure"
Jul  4 08:28:59.258: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d container projected-configmap-volume-test: 
STEP: delete the pod
Jul  4 08:29:00.589: INFO: Waiting for pod pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d to disappear
Jul  4 08:29:00.867: INFO: Pod pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:29:00.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4408" for this suite.

• [SLOW TEST:104.137 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":747,"failed":0}
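Annotation: the projected-configMap variant above differs from the plain ConfigMap volume test in that the volume is declared under `projected.sources` and a `defaultMode` is set on it. A sketch of that volume shape; the 0400 mode and all names are assumptions for illustration (the suite verifies whatever non-default mode it sets):

```python
# Sketch (assumed values): a pod consuming a ConfigMap through a projected
# volume with defaultMode set, so mounted files get that permission mode.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-configmaps-example"},  # illustrative
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "projected-configmap-volume-test",
                "image": "busybox",
                # Printing the file's mode is one way such a test could verify
                # defaultMode took effect; the real check is an assumption here.
                "command": ["sh", "-c", "stat -c %a /etc/projected-configmap-volume/data-1"],
                "volumeMounts": [
                    {
                        "name": "projected-configmap-volume",
                        "mountPath": "/etc/projected-configmap-volume",
                        "readOnly": True,
                    }
                ],
            }
        ],
        "volumes": [
            {
                "name": "projected-configmap-volume",
                "projected": {
                    "defaultMode": 0o400,  # assumption: 0400 (owner read-only)
                    "sources": [
                        {"configMap": {"name": "projected-configmap-example"}}
                    ],
                },
            }
        ],
    },
}

assert pod_manifest["spec"]["volumes"][0]["projected"]["defaultMode"] == 0o400
```

The Kubernetes API carries `defaultMode` as a decimal integer, so 0o400 serializes as 256.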
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:29:01.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:30:25.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6063" for this suite.
STEP: Destroying namespace "nsdeletetest-6936" for this suite.
Jul  4 08:30:26.593: INFO: Namespace nsdeletetest-6936 was already deleted
STEP: Destroying namespace "nsdeletetest-3735" for this suite.

• [SLOW TEST:86.759 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":41,"skipped":749,"failed":0}
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:30:27.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:30:50.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4878" for this suite.
STEP: Destroying namespace "nsdeletetest-3385" for this suite.
Jul  4 08:30:50.432: INFO: Namespace nsdeletetest-3385 was already deleted
STEP: Destroying namespace "nsdeletetest-7685" for this suite.

• [SLOW TEST:22.520 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":42,"skipped":751,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:30:50.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:31:08.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9399" for this suite.

• [SLOW TEST:18.048 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":43,"skipped":775,"failed":0}
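Annotation: the STEP lines above trace quota accounting through a pod's life: a pod that fits is admitted and charged against the quota, a pod that would exceed the remaining quota is rejected, and deleting the pod releases its usage. A toy model of that admission check (the hard limits, units, and request values are invented for illustration, not the suite's):

```python
# Toy sketch of ResourceQuota-style admission, not Kubernetes code: a pod is
# admitted only if current usage plus its requests stays within every hard limit.
hard = {"pods": 1, "requests.cpu": 500, "requests.memory": 256}  # illustrative limits


def admit(used, pod_requests):
    """Return (admitted, new_usage); rejection leaves usage unchanged."""
    proposed = {k: used.get(k, 0) + pod_requests.get(k, 0) for k in hard}
    if any(proposed[k] > hard[k] for k in hard):
        return False, used
    return True, proposed


used = {"pods": 0, "requests.cpu": 0, "requests.memory": 0}

# "Creating a Pod that fits quota" / "ResourceQuota status captures the pod usage"
ok, used = admit(used, {"pods": 1, "requests.cpu": 100, "requests.memory": 64})
assert ok and used["pods"] == 1

# "Not allowing a pod to be created that exceeds remaining quota"
ok2, used = admit(used, {"pods": 1, "requests.cpu": 100, "requests.memory": 64})
assert not ok2

# "Deleting the pod" / "resource quota status released the pod usage"
used = {k: used[k] - v for k, v in
        {"pods": 1, "requests.cpu": 100, "requests.memory": 64}.items()}
assert used == {"pods": 0, "requests.cpu": 0, "requests.memory": 0}
```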
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:31:08.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-9383b7d9-9e78-47a9-9490-9b0bf8760bb9 in namespace container-probe-6294
Jul  4 08:31:54.641: INFO: Started pod test-webserver-9383b7d9-9e78-47a9-9490-9b0bf8760bb9 in namespace container-probe-6294
STEP: checking the pod's current state and verifying that restartCount is present
Jul  4 08:31:54.643: INFO: Initial restart count of pod test-webserver-9383b7d9-9e78-47a9-9490-9b0bf8760bb9 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:35:56.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6294" for this suite.

• [SLOW TEST:289.351 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":789,"failed":0}
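Annotation: the liveness-probe test above starts a webserver pod, records an initial restartCount of 0, then watches for roughly four minutes to confirm the count never rises, i.e. the `/healthz` HTTP probe keeps succeeding and the kubelet never restarts the container. A sketch of the probe shape involved; the probe timings, port, and image are assumptions (only the `/healthz` path comes from the test name):

```python
# Sketch (assumed values): a container with an HTTP liveness probe. As long as
# GET /healthz keeps returning success, the kubelet leaves restartCount at 0.
container = {
    "name": "test-webserver",       # matches the pod-name prefix in the log
    "image": "test-webserver",      # illustrative image name
    "ports": [{"containerPort": 80}],
    "livenessProbe": {
        "httpGet": {"path": "/healthz", "port": 80},
        "initialDelaySeconds": 15,  # assumed timing values
        "periodSeconds": 10,
        "failureThreshold": 3,
    },
}

assert container["livenessProbe"]["httpGet"]["path"] == "/healthz"
```

The inverse tests in this family use a probe that starts failing partway through and assert that restartCount does increase.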
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:35:57.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jul  4 08:36:00.804: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:36:49.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7571" for this suite.

• [SLOW TEST:51.285 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":45,"skipped":789,"failed":0}
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:36:49.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:37:01.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-55" for this suite.

• [SLOW TEST:12.421 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":46,"skipped":789,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:37:01.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-d6f88050-3aff-4430-9294-2c41f9a89544
STEP: Creating a pod to test consume secrets
Jul  4 08:37:03.295: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56" in namespace "projected-6803" to be "success or failure"
Jul  4 08:37:03.304: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.449826ms
Jul  4 08:37:05.779: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.483576359s
Jul  4 08:37:07.812: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.516941545s
Jul  4 08:37:09.829: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.533808403s
Jul  4 08:37:11.834: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538337777s
Jul  4 08:37:13.837: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 10.541919415s
Jul  4 08:37:15.885: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 12.590044518s
Jul  4 08:37:17.889: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 14.593587131s
Jul  4 08:37:20.248: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 16.952410761s
Jul  4 08:37:22.251: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 18.955479667s
Jul  4 08:37:24.254: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 20.958942625s
Jul  4 08:37:27.264: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 23.968784869s
Jul  4 08:37:29.267: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 25.971875842s
Jul  4 08:37:31.412: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 28.116815155s
Jul  4 08:37:33.416: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 30.121078888s
Jul  4 08:37:35.420: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 32.124219499s
Jul  4 08:37:37.471: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 34.176106072s
Jul  4 08:37:39.914: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 36.619086573s
Jul  4 08:37:41.917: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 38.622015455s
Jul  4 08:37:43.921: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 40.625565971s
Jul  4 08:37:45.988: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 42.693093681s
Jul  4 08:37:48.527: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 45.231588176s
Jul  4 08:37:50.529: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 47.233874718s
Jul  4 08:37:52.610: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 49.314621339s
Jul  4 08:37:55.193: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 51.89800892s
Jul  4 08:37:58.150: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 54.854452693s
Jul  4 08:38:00.763: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 57.467541366s
Jul  4 08:38:02.766: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 59.470684964s
Jul  4 08:38:04.770: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.474348843s
Jul  4 08:38:07.174: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.878519983s
Jul  4 08:38:09.176: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.880882065s
Jul  4 08:38:11.272: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.976183391s
Jul  4 08:38:13.366: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.071065669s
Jul  4 08:38:15.370: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.074636732s
Jul  4 08:38:17.492: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.196361153s
Jul  4 08:38:19.515: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.22013626s
Jul  4 08:38:21.932: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.636324133s
Jul  4 08:38:23.935: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.640096994s
Jul  4 08:38:25.947: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.651228379s
Jul  4 08:38:28.827: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.531792504s
Jul  4 08:38:30.830: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Running", Reason="", readiness=true. Elapsed: 1m27.53437261s
Jul  4 08:38:32.833: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m29.538150272s
STEP: Saw pod success
Jul  4 08:38:32.834: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56" satisfied condition "success or failure"
Jul  4 08:38:32.836: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56 container projected-secret-volume-test: 
STEP: delete the pod
Jul  4 08:38:32.879: INFO: Waiting for pod pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56 to disappear
Jul  4 08:38:32.910: INFO: Pod pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:38:32.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6803" for this suite.

• [SLOW TEST:91.389 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":831,"failed":0}
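The projected-secret test above creates a secret, mounts it into a pod through a `projected` volume, and checks the file contents from the container. A minimal manifest sketching that setup follows; the pod name, secret name, image, key, and mount path are illustrative assumptions, not values taken from this log.

```yaml
# Sketch only: approximates the pod shape the projected-secret test creates.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  volumes:
    - name: projected-secret-volume
      projected:
        sources:
          - secret:
              name: projected-secret-test   # must already exist in the namespace
  containers:
    - name: projected-secret-volume-test
      image: busybox:1.29
      # Print the mounted secret key so the test can read it back from the logs.
      command: ["cat", "/etc/projected-secret-volume/data-1"]
      volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
          readOnly: true
```

The pod runs to completion (`restartPolicy: Never`, phase `Succeeded`), which is why the log polls until the phase leaves `Pending`/`Running` and then fetches the container logs.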
S
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:38:32.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul  4 08:38:33.000: INFO: Waiting up to 5m0s for pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8" in namespace "downward-api-338" to be "success or failure"
Jul  4 08:38:33.004: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195701ms
Jul  4 08:38:35.097: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097346239s
Jul  4 08:38:37.107: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106615252s
Jul  4 08:38:39.154: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153938176s
Jul  4 08:38:41.578: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Running", Reason="", readiness=true. Elapsed: 8.577683483s
Jul  4 08:38:43.581: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Running", Reason="", readiness=true. Elapsed: 10.580909897s
Jul  4 08:38:45.584: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Running", Reason="", readiness=true. Elapsed: 12.583621053s
Jul  4 08:38:47.592: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.591757001s
STEP: Saw pod success
Jul  4 08:38:47.592: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8" satisfied condition "success or failure"
Jul  4 08:38:47.654: INFO: Trying to get logs from node jerma-worker pod downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8 container dapi-container: 
STEP: delete the pod
Jul  4 08:38:49.386: INFO: Waiting for pod downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8 to disappear
Jul  4 08:38:49.467: INFO: Pod downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:38:49.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-338" for this suite.

• [SLOW TEST:16.541 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":832,"failed":0}
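The downward API test above relies on the documented fallback that `resourceFieldRef` for `limits.cpu` and `limits.memory` resolves to the node's allocatable values when the container declares no limits. A hedged manifest sketch; names and image are assumptions:

```yaml
# Sketch only: env vars populated from resource limits via the downward API.
# With no resources.limits set, these default to node allocatable.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox:1.29
      command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
      env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
```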
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:38:49.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357
STEP: creating a pod

Jul  4 08:38:50.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6187 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jul  4 08:38:54.161: INFO: stderr: ""
Jul  4 08:38:54.161: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Jul  4 08:38:54.161: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jul  4 08:38:54.161: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6187" to be "running and ready, or succeeded"
Jul  4 08:38:54.168: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101819ms
Jul  4 08:38:57.608: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.446821227s
Jul  4 08:38:59.611: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.44986102s
Jul  4 08:39:02.076: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.914824611s
Jul  4 08:39:04.080: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 9.91887938s
Jul  4 08:39:06.253: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.091515367s
Jul  4 08:39:09.273: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 15.111373829s
Jul  4 08:39:13.004: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 18.84244453s
Jul  4 08:39:16.044: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 21.882332599s
Jul  4 08:39:18.935: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 24.773432606s
Jul  4 08:39:21.535: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 27.373166029s
Jul  4 08:39:23.655: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 29.493879235s
Jul  4 08:39:25.659: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 31.497373804s
Jul  4 08:39:28.343: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 34.181803418s
Jul  4 08:39:31.399: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 37.23755318s
Jul  4 08:39:33.606: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 39.444922255s
Jul  4 08:39:35.708: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 41.546593035s
Jul  4 08:39:38.601: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 44.439113428s
Jul  4 08:39:40.603: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 46.441741016s
Jul  4 08:39:42.644: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 48.482163513s
Jul  4 08:39:45.194: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 51.032376776s
Jul  4 08:39:47.665: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 53.503745354s
Jul  4 08:39:50.896: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 56.735018541s
Jul  4 08:39:52.900: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 58.738170258s
Jul  4 08:39:55.000: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.83849196s
Jul  4 08:39:58.192: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.030436921s
Jul  4 08:40:00.195: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.033989016s
Jul  4 08:40:02.199: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.037271688s
Jul  4 08:40:04.203: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.04108096s
Jul  4 08:40:06.551: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.389398494s
Jul  4 08:40:09.526: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.364650676s
Jul  4 08:40:11.530: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.368967423s
Jul  4 08:40:13.655: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.493610387s
Jul  4 08:40:15.658: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 1m21.496281896s
Jul  4 08:40:15.658: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jul  4 08:40:15.658: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jul  4 08:40:15.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187'
Jul  4 08:40:15.747: INFO: stderr: ""
Jul  4 08:40:15.747: INFO: stdout: "I0704 08:40:14.993936       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/45l 492\nI0704 08:40:15.194046       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/jwj2 332\nI0704 08:40:15.394098       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/ksb 283\nI0704 08:40:15.594087       1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/xc9l 249\n"
Jul  4 08:40:17.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187'
Jul  4 08:40:17.853: INFO: stderr: ""
Jul  4 08:40:17.853: INFO: stdout: "I0704 08:40:14.993936       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/45l 492\nI0704 08:40:15.194046       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/jwj2 332\nI0704 08:40:15.394098       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/ksb 283\nI0704 08:40:15.594087       1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/xc9l 249\nI0704 08:40:15.794075       1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/rrkd 552\nI0704 08:40:15.994105       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/6vjb 203\nI0704 08:40:16.194080       1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/p9z9 290\nI0704 08:40:16.394078       1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/29qn 585\nI0704 08:40:16.594097       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/dbqs 288\nI0704 08:40:16.794092       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/btj 387\nI0704 08:40:16.994063       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/frs 479\nI0704 08:40:17.194105       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/xn2k 493\nI0704 08:40:17.394082       1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/4df 446\nI0704 08:40:17.594097       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/h8x 294\nI0704 08:40:17.794070       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/9cq 350\n"
STEP: limiting log lines
Jul  4 08:40:17.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187 --tail=1'
Jul  4 08:40:17.947: INFO: stderr: ""
Jul  4 08:40:17.947: INFO: stdout: "I0704 08:40:17.794070       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/9cq 350\n"
Jul  4 08:40:17.947: INFO: got output "I0704 08:40:17.794070       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/9cq 350\n"
STEP: limiting log bytes
Jul  4 08:40:17.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187 --limit-bytes=1'
Jul  4 08:40:18.031: INFO: stderr: ""
Jul  4 08:40:18.031: INFO: stdout: "I"
Jul  4 08:40:18.031: INFO: got output "I"
STEP: exposing timestamps
Jul  4 08:40:18.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187 --tail=1 --timestamps'
Jul  4 08:40:18.127: INFO: stderr: ""
Jul  4 08:40:18.127: INFO: stdout: "2020-07-04T08:40:17.994184721Z I0704 08:40:17.994067       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/696b 474\n"
Jul  4 08:40:18.127: INFO: got output "2020-07-04T08:40:17.994184721Z I0704 08:40:17.994067       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/696b 474\n"
STEP: restricting to a time range
Jul  4 08:40:20.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187 --since=1s'
Jul  4 08:40:20.739: INFO: stderr: ""
Jul  4 08:40:20.739: INFO: stdout: "I0704 08:40:19.794084       1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/lzg 248\nI0704 08:40:19.994094       1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/87g 318\nI0704 08:40:20.194121       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/7c7q 287\nI0704 08:40:20.394194       1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/lpp 388\nI0704 08:40:20.594068       1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/8t6p 247\n"
Jul  4 08:40:20.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187 --since=24h'
Jul  4 08:40:20.833: INFO: stderr: ""
Jul  4 08:40:20.833: INFO: stdout: "I0704 08:40:14.993936       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/45l 492\nI0704 08:40:15.194046       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/jwj2 332\nI0704 08:40:15.394098       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/ksb 283\nI0704 08:40:15.594087       1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/xc9l 249\nI0704 08:40:15.794075       1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/rrkd 552\nI0704 08:40:15.994105       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/6vjb 203\nI0704 08:40:16.194080       1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/p9z9 290\nI0704 08:40:16.394078       1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/29qn 585\nI0704 08:40:16.594097       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/dbqs 288\nI0704 08:40:16.794092       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/btj 387\nI0704 08:40:16.994063       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/frs 479\nI0704 08:40:17.194105       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/xn2k 493\nI0704 08:40:17.394082       1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/4df 446\nI0704 08:40:17.594097       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/h8x 294\nI0704 08:40:17.794070       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/9cq 350\nI0704 08:40:17.994067       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/696b 474\nI0704 08:40:18.194102       1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/rhw 587\nI0704 08:40:18.394103       1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/sgf 289\nI0704 08:40:18.594093       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/dkw 221\nI0704 08:40:18.794073       1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/wwg 444\nI0704 08:40:18.994098       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/7jmk 201\nI0704 08:40:19.194106       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/q9w 375\nI0704 08:40:19.394067       1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/89jc 572\nI0704 08:40:19.594063       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/4ql 476\nI0704 08:40:19.794084       1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/lzg 248\nI0704 08:40:19.994094       1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/87g 318\nI0704 08:40:20.194121       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/7c7q 287\nI0704 08:40:20.394194       1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/lpp 388\nI0704 08:40:20.594068       1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/8t6p 247\nI0704 08:40:20.794085       1 logs_generator.go:76] 29 POST /api/v1/namespaces/default/pods/9vs 333\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
Jul  4 08:40:20.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6187'
Jul  4 08:41:18.777: INFO: stderr: ""
Jul  4 08:41:18.777: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:41:18.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6187" for this suite.

• [SLOW TEST:151.605 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":49,"skipped":853,"failed":0}
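The filtering steps this test runs (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) can be approximated offline on a captured log. A sketch under stated assumptions: the file path and sample lines below are made up, not taken from the cluster.

```shell
# Sketch only: reproduces the effect of two kubectl log-filtering flags
# on a captured log file. File name and contents are illustrative.
cat > /tmp/generator.log <<'EOF'
I0704 08:40:17.794070 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/9cq 350
I0704 08:40:17.994067 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/696b 474
EOF

# kubectl logs --tail=1 keeps only the last line:
tail -n 1 /tmp/generator.log

# kubectl logs --limit-bytes=1 keeps only the first byte:
head -c 1 /tmp/generator.log
```

Against the live cluster the equivalent invocations are the ones the test itself runs, e.g. `kubectl logs logs-generator logs-generator --namespace=kubectl-6187 --tail=1`; `--since` and `--timestamps` have no simple offline analogue because they depend on kubelet-recorded timestamps.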
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:41:21.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 08:41:24.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5" in namespace "projected-3009" to be "success or failure"
Jul  4 08:41:24.726: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 81.968555ms
Jul  4 08:41:28.076: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.431817445s
Jul  4 08:41:30.187: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.543333023s
Jul  4 08:41:32.256: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.612427402s
Jul  4 08:41:35.498: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.854619011s
Jul  4 08:41:37.500: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.856666571s
Jul  4 08:41:39.504: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.860074882s
Jul  4 08:41:42.742: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.098634932s
Jul  4 08:41:44.746: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.101784579s
Jul  4 08:41:46.749: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.10547057s
Jul  4 08:41:49.446: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.802679392s
Jul  4 08:41:51.728: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 27.084747748s
Jul  4 08:41:53.764: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 29.12029745s
Jul  4 08:41:55.767: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 31.123463917s
Jul  4 08:41:57.789: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 33.145115481s
Jul  4 08:41:59.793: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 35.149488226s
Jul  4 08:42:02.476: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 37.832536033s
Jul  4 08:42:05.550: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.906705318s
STEP: Saw pod success
Jul  4 08:42:05.551: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5" satisfied condition "success or failure"
Jul  4 08:42:05.554: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5 container client-container: 
STEP: delete the pod
Jul  4 08:42:06.698: INFO: Waiting for pod downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5 to disappear
Jul  4 08:42:06.770: INFO: Pod downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:42:06.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3009" for this suite.

• [SLOW TEST:45.696 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":853,"failed":0}
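This test exposes the memory limit through a downward API source inside a `projected` volume rather than through env vars; with no limit declared, the file again reports node allocatable memory. A hedged manifest sketch (names, image, and paths are assumptions; note that `resourceFieldRef` in a volume source requires an explicit `containerName`):

```yaml
# Sketch only: downward API resource field exposed as a projected volume file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox:1.29
      command: ["cat", "/etc/podinfo/memory_limit"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        sources:
          - downwardAPI:
              items:
                - path: memory_limit
                  resourceFieldRef:
                    containerName: client-container
                    resource: limits.memory
```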
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:42:06.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul  4 08:42:08.513: INFO: Waiting up to 5m0s for pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc" in namespace "downward-api-2143" to be "success or failure"
Jul  4 08:42:08.515: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118556ms
Jul  4 08:42:10.518: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00460024s
Jul  4 08:42:13.057: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.544002657s
Jul  4 08:42:15.148: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.635082792s
Jul  4 08:42:17.694: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.180575541s
Jul  4 08:42:20.370: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.857049569s
Jul  4 08:42:22.532: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.01876291s
Jul  4 08:42:24.806: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.292846978s
Jul  4 08:42:26.809: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.296189418s
Jul  4 08:42:28.927: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.413564131s
Jul  4 08:42:31.139: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.626273648s
Jul  4 08:42:33.143: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.629546753s
Jul  4 08:42:35.146: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.632733403s
Jul  4 08:42:37.241: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.72790257s
Jul  4 08:42:39.376: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.863187422s
Jul  4 08:42:41.380: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.866653544s
Jul  4 08:42:43.610: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 35.096592456s
Jul  4 08:42:45.613: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.099815463s
STEP: Saw pod success
Jul  4 08:42:45.613: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc" satisfied condition "success or failure"
Jul  4 08:42:45.615: INFO: Trying to get logs from node jerma-worker2 pod downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc container dapi-container: 
STEP: delete the pod
Jul  4 08:42:45.643: INFO: Waiting for pod downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc to disappear
Jul  4 08:42:45.666: INFO: Pod downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:42:45.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2143" for this suite.

• [SLOW TEST:39.010 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":857,"failed":0}
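The pod spec itself is not shown in the log; exposing the pod UID as an environment variable uses a `fieldRef` on `metadata.uid`. A minimal sketch, with illustrative names (the container name `dapi-container` does appear in the log above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env"]   # prints POD_UID among the env vars
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
```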
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:42:45.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:43:06.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5074" for this suite.

• [SLOW TEST:21.163 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":52,"skipped":867,"failed":0}
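The two quotas this test creates are scoped so that one tracks only terminating pods (those with `activeDeadlineSeconds` set) and the other only non-terminating pods, which is why each quota "ignored" the other kind of pod in the STEPs above. A sketch of the pair, with illustrative names and limits:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating        # illustrative name
spec:
  hard:
    pods: "1"
  scopes: ["Terminating"]        # matches pods with activeDeadlineSeconds set
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-terminating    # illustrative name
spec:
  hard:
    pods: "1"
  scopes: ["NotTerminating"]     # matches pods without activeDeadlineSeconds
```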
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:43:06.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jul  4 08:43:07.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4117'
Jul  4 08:43:07.517: INFO: stderr: ""
Jul  4 08:43:07.517: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  4 08:43:07.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4117'
Jul  4 08:43:07.618: INFO: stderr: ""
Jul  4 08:43:07.618: INFO: stdout: "update-demo-nautilus-pr7zz update-demo-nautilus-wm2rs "
Jul  4 08:43:07.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117'
Jul  4 08:43:07.709: INFO: stderr: ""
Jul  4 08:43:07.709: INFO: stdout: ""
Jul  4 08:43:07.709: INFO: update-demo-nautilus-pr7zz is created but not running
Jul  4 08:43:12.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4117'
Jul  4 08:43:12.797: INFO: stderr: ""
Jul  4 08:43:12.797: INFO: stdout: "update-demo-nautilus-pr7zz update-demo-nautilus-wm2rs "
Jul  4 08:43:12.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117'
Jul  4 08:43:12.891: INFO: stderr: ""
Jul  4 08:43:12.891: INFO: stdout: ""
Jul  4 08:43:12.891: INFO: update-demo-nautilus-pr7zz is created but not running
Jul  4 08:43:17.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4117'
Jul  4 08:43:18.107: INFO: stderr: ""
Jul  4 08:43:18.107: INFO: stdout: "update-demo-nautilus-pr7zz update-demo-nautilus-wm2rs "
Jul  4 08:43:18.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117'
Jul  4 08:43:18.330: INFO: stderr: ""
Jul  4 08:43:18.330: INFO: stdout: ""
Jul  4 08:43:18.330: INFO: update-demo-nautilus-pr7zz is created but not running
Jul  4 08:43:23.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4117'
Jul  4 08:43:23.499: INFO: stderr: ""
Jul  4 08:43:23.499: INFO: stdout: "update-demo-nautilus-pr7zz update-demo-nautilus-wm2rs "
Jul  4 08:43:23.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117'
Jul  4 08:43:23.584: INFO: stderr: ""
Jul  4 08:43:23.584: INFO: stdout: ""
Jul  4 08:43:23.584: INFO: update-demo-nautilus-pr7zz is created but not running
Jul  4 08:43:28.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4117'
Jul  4 08:43:28.684: INFO: stderr: ""
Jul  4 08:43:28.684: INFO: stdout: "update-demo-nautilus-pr7zz update-demo-nautilus-wm2rs "
Jul  4 08:43:28.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117'
Jul  4 08:43:28.771: INFO: stderr: ""
Jul  4 08:43:28.771: INFO: stdout: "true"
Jul  4 08:43:28.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4117'
Jul  4 08:43:28.857: INFO: stderr: ""
Jul  4 08:43:28.857: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  4 08:43:28.857: INFO: validating pod update-demo-nautilus-pr7zz
Jul  4 08:43:28.860: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  4 08:43:28.860: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  4 08:43:28.860: INFO: update-demo-nautilus-pr7zz is verified up and running
Jul  4 08:43:28.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wm2rs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117'
Jul  4 08:43:28.956: INFO: stderr: ""
Jul  4 08:43:28.956: INFO: stdout: "true"
Jul  4 08:43:28.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wm2rs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4117'
Jul  4 08:43:29.037: INFO: stderr: ""
Jul  4 08:43:29.037: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  4 08:43:29.037: INFO: validating pod update-demo-nautilus-wm2rs
Jul  4 08:43:29.040: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  4 08:43:29.040: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  4 08:43:29.040: INFO: update-demo-nautilus-wm2rs is verified up and running
STEP: using delete to clean up resources
Jul  4 08:43:29.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4117'
Jul  4 08:43:29.134: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  4 08:43:29.134: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  4 08:43:29.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4117'
Jul  4 08:43:29.217: INFO: stderr: "No resources found in kubectl-4117 namespace.\n"
Jul  4 08:43:29.217: INFO: stdout: ""
Jul  4 08:43:29.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4117 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  4 08:43:29.308: INFO: stderr: ""
Jul  4 08:43:29.308: INFO: stdout: "update-demo-nautilus-pr7zz\nupdate-demo-nautilus-wm2rs\n"
Jul  4 08:43:29.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4117'
Jul  4 08:43:29.904: INFO: stderr: "No resources found in kubectl-4117 namespace.\n"
Jul  4 08:43:29.904: INFO: stdout: ""
Jul  4 08:43:29.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4117 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  4 08:43:29.999: INFO: stderr: ""
Jul  4 08:43:29.999: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:43:29.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4117" for this suite.

• [SLOW TEST:23.053 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":53,"skipped":879,"failed":0}
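The manifest piped to `kubectl create -f -` above is not echoed in the log; reconstructed from the details it does show (controller `update-demo-nautilus`, label `name=update-demo`, container `update-demo`, image `gcr.io/kubernetes-e2e-test-images/nautilus:1.0`, two pods), it looks roughly like:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                    # the log polls two pods: -pr7zz and -wm2rs
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
```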
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:43:30.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  4 08:43:30.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7244'
Jul  4 08:43:30.233: INFO: stderr: ""
Jul  4 08:43:30.233: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jul  4 08:43:35.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7244 -o json'
Jul  4 08:43:35.397: INFO: stderr: ""
Jul  4 08:43:35.397: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-07-04T08:43:30Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-7244\",\n        \"resourceVersion\": \"10570\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-7244/pods/e2e-test-httpd-pod\",\n        \"uid\": \"5589bb13-f301-4129-8fab-b0eedc1c3428\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-xhqsk\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-xhqsk\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-xhqsk\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-04T08:43:30Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-04T08:43:33Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-04T08:43:33Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-04T08:43:30Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://090d8a9d2ac2f59fbacf2c3c314029db44ce145c6549fbdd9ce7d9c33c13653c\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": 
{\n                        \"startedAt\": \"2020-07-04T08:43:32Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.17.0.10\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.25\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.25\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-07-04T08:43:30Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jul  4 08:43:35.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7244'
Jul  4 08:43:35.616: INFO: stderr: ""
Jul  4 08:43:35.616: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795
Jul  4 08:43:35.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7244'
Jul  4 08:43:46.215: INFO: stderr: ""
Jul  4 08:43:46.215: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:43:46.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7244" for this suite.

• [SLOW TEST:16.230 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":54,"skipped":908,"failed":0}
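Note that the test does not pipe a trimmed manifest to `kubectl replace -f -`: it re-submits the full pod JSON it retrieved (shown above) with only the image swapped, since most pod spec fields are immutable and a partial object would be rejected. The one field that changes, as an excerpt:

```yaml
# excerpt of the replace payload: the only field the test changes
spec:
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/busybox:1.29   # was docker.io/library/httpd:2.4.38-alpine
```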
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:43:46.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-216/configmap-test-fef20a81-a3b2-41d5-a54f-db810be0c333
STEP: Creating a pod to test consume configMaps
Jul  4 08:43:46.327: INFO: Waiting up to 5m0s for pod "pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f" in namespace "configmap-216" to be "success or failure"
Jul  4 08:43:46.359: INFO: Pod "pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.698021ms
Jul  4 08:43:48.363: INFO: Pod "pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035862213s
Jul  4 08:43:50.367: INFO: Pod "pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04015265s
STEP: Saw pod success
Jul  4 08:43:50.368: INFO: Pod "pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f" satisfied condition "success or failure"
Jul  4 08:43:50.370: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f container env-test: 
STEP: delete the pod
Jul  4 08:43:50.404: INFO: Waiting for pod pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f to disappear
Jul  4 08:43:50.422: INFO: Pod pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:43:50.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-216" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":924,"failed":0}
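Consuming a ConfigMap via an environment variable, as this test does, uses `configMapKeyRef`. A minimal sketch — the key and values are illustrative, though the ConfigMap name and container name `env-test` appear in the log:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-fef20a81-a3b2-41d5-a54f-db810be0c333
data:
  data-1: value-1                # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-fef20a81-a3b2-41d5-a54f-db810be0c333
          key: data-1
```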
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:43:50.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul  4 08:43:50.526: INFO: Waiting up to 5m0s for pod "downward-api-e3165bce-daf5-494f-81ba-70fdc7417895" in namespace "downward-api-145" to be "success or failure"
Jul  4 08:43:50.530: INFO: Pod "downward-api-e3165bce-daf5-494f-81ba-70fdc7417895": Phase="Pending", Reason="", readiness=false. Elapsed: 3.802211ms
Jul  4 08:43:52.545: INFO: Pod "downward-api-e3165bce-daf5-494f-81ba-70fdc7417895": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019251717s
Jul  4 08:43:54.550: INFO: Pod "downward-api-e3165bce-daf5-494f-81ba-70fdc7417895": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023534119s
STEP: Saw pod success
Jul  4 08:43:54.550: INFO: Pod "downward-api-e3165bce-daf5-494f-81ba-70fdc7417895" satisfied condition "success or failure"
Jul  4 08:43:54.552: INFO: Trying to get logs from node jerma-worker pod downward-api-e3165bce-daf5-494f-81ba-70fdc7417895 container dapi-container: 
STEP: delete the pod
Jul  4 08:43:54.624: INFO: Waiting for pod downward-api-e3165bce-daf5-494f-81ba-70fdc7417895 to disappear
Jul  4 08:43:54.627: INFO: Pod downward-api-e3165bce-daf5-494f-81ba-70fdc7417895 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:43:54.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-145" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":967,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:43:54.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  4 08:44:05.080: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  4 08:44:05.110: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  4 08:44:07.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  4 08:44:07.115: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  4 08:44:09.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  4 08:44:09.115: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:44:09.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5874" for this suite.

• [SLOW TEST:14.489 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":979,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:44:09.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3610.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3610.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
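The `podARec` computation in the probe commands above (`hostname -i` piped through `awk`) just dash-joins the pod's IPv4 octets and appends the namespace's pod DNS suffix. A minimal Python sketch of that transformation (the function name and example IP are illustrative, not taken from the suite):

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build the pod A-record name the way the awk one-liner does:
    dots in the IPv4 address become dashes, then ".<ns>.pod.<domain>" is appended."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

# Hypothetical pod IP, shown only to illustrate the shape of the record name.
print(pod_a_record("10.244.1.5", "dns-3610"))
```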

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  4 08:44:29.343: INFO: DNS probes using dns-3610/dns-test-4c9bf7c1-e7fa-4d20-ad60-2e1b45b7d16e succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:44:29.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3610" for this suite.

• [SLOW TEST:20.315 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":58,"skipped":997,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:44:29.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:44:29.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2794" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":59,"skipped":1007,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:44:29.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:45:30.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8039" for this suite.

• [SLOW TEST:60.257 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1019,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:45:30.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-da022cae-8079-4ced-8164-8e569c5f3e7d
STEP: Creating a pod to test consume secrets
Jul  4 08:45:30.144: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648" in namespace "projected-1483" to be "success or failure"
Jul  4 08:45:30.152: INFO: Pod "pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648": Phase="Pending", Reason="", readiness=false. Elapsed: 8.38494ms
Jul  4 08:45:32.156: INFO: Pod "pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012119569s
Jul  4 08:45:34.160: INFO: Pod "pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016528571s
STEP: Saw pod success
Jul  4 08:45:34.160: INFO: Pod "pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648" satisfied condition "success or failure"
Jul  4 08:45:34.163: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648 container projected-secret-volume-test: 
STEP: delete the pod
Jul  4 08:45:34.209: INFO: Waiting for pod pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648 to disappear
Jul  4 08:45:34.218: INFO: Pod pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:45:34.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1483" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1025,"failed":0}
SSSS
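The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Elapsed: ...` lines above come from the framework polling the pod's phase on a short interval until it reaches a terminal state or the deadline passes. A hedged Python sketch of that polling loop (not the framework's actual implementation; the injectable `clock`/`sleep` parameters are added here for testability):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the timeout expires,
    mirroring the 'Phase="Pending" ... Elapsed: ...' lines in the log."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
```

With a real client, `get_phase` would read `pod.status.phase` from the API server on each call.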
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:45:34.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jul  4 08:45:34.288: INFO: >>> kubeConfig: /root/.kube/config
Jul  4 08:45:37.219: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:45:46.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7750" for this suite.

• [SLOW TEST:12.537 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":62,"skipped":1029,"failed":0}
S
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:45:46.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jul  4 08:45:51.357: INFO: Successfully updated pod "adopt-release-qsn5v"
STEP: Checking that the Job readopts the Pod
Jul  4 08:45:51.357: INFO: Waiting up to 15m0s for pod "adopt-release-qsn5v" in namespace "job-8919" to be "adopted"
Jul  4 08:45:51.379: INFO: Pod "adopt-release-qsn5v": Phase="Running", Reason="", readiness=true. Elapsed: 21.707603ms
Jul  4 08:45:53.383: INFO: Pod "adopt-release-qsn5v": Phase="Running", Reason="", readiness=true. Elapsed: 2.025383577s
Jul  4 08:45:53.383: INFO: Pod "adopt-release-qsn5v" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jul  4 08:45:53.893: INFO: Successfully updated pod "adopt-release-qsn5v"
STEP: Checking that the Job releases the Pod
Jul  4 08:45:53.893: INFO: Waiting up to 15m0s for pod "adopt-release-qsn5v" in namespace "job-8919" to be "released"
Jul  4 08:45:53.918: INFO: Pod "adopt-release-qsn5v": Phase="Running", Reason="", readiness=true. Elapsed: 24.687297ms
Jul  4 08:45:56.218: INFO: Pod "adopt-release-qsn5v": Phase="Running", Reason="", readiness=true. Elapsed: 2.324254629s
Jul  4 08:45:56.218: INFO: Pod "adopt-release-qsn5v" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:45:56.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8919" for this suite.

• [SLOW TEST:9.600 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":63,"skipped":1030,"failed":0}
SSSSSSS
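The Job test above exercises controller ownership rules: an orphaned pod whose labels match the Job's selector is re-adopted, and an owned pod whose matching labels are removed is released. A simplified Python sketch of that decision (illustrative names; the real controller also checks controllerRef UIDs and deletion timestamps):

```python
def selector_matches(selector: dict, labels: dict) -> bool:
    # An equality-based label selector matches when every selector
    # key/value pair is present in the pod's labels.
    return all(labels.get(k) == v for k, v in selector.items())

def reconcile_ownership(selector: dict, pod_labels: dict, owned: bool) -> str:
    """Decide adopt/release the way the adopt-release test expects:
    matching orphans are adopted, owned pods that stop matching are released."""
    matches = selector_matches(selector, pod_labels)
    if matches and not owned:
        return "adopt"
    if not matches and owned:
        return "release"
    return "keep"
```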
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:45:56.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul  4 08:45:56.633: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  4 08:45:56.643: INFO: Waiting for terminating namespaces to be deleted...
Jul  4 08:45:56.645: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Jul  4 08:45:56.651: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  4 08:45:56.651: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  4 08:45:56.651: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  4 08:45:56.651: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  4 08:45:56.651: INFO: adopt-release-qsn5v from job-8919 started at 2020-07-04 08:45:46 +0000 UTC (1 container statuses recorded)
Jul  4 08:45:56.651: INFO: 	Container c ready: true, restart count 0
Jul  4 08:45:56.651: INFO: adopt-release-wjgwh from job-8919 started at 2020-07-04 08:45:46 +0000 UTC (1 container statuses recorded)
Jul  4 08:45:56.651: INFO: 	Container c ready: true, restart count 0
Jul  4 08:45:56.651: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Jul  4 08:45:56.672: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  4 08:45:56.672: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  4 08:45:56.672: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  4 08:45:56.672: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  4 08:45:56.672: INFO: adopt-release-b7b7n from job-8919 started at 2020-07-04 08:45:54 +0000 UTC (1 container statuses recorded)
Jul  4 08:45:56.672: INFO: 	Container c ready: false, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.161e8046189e3ccb], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:45:57.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1099" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":64,"skipped":1037,"failed":0}
SSSSSSSSSS
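The `FailedScheduling` event above ("0/3 nodes are available: 3 node(s) didn't match node selector") reflects the NodeSelector predicate: a node is feasible only if it carries every label in the pod's `nodeSelector`. A minimal sketch of that check (illustrative node names and labels, not the scheduler's actual code):

```python
def node_matches(pod_node_selector: dict, node_labels: dict) -> bool:
    # Every nodeSelector key/value must be present on the node's labels.
    return all(node_labels.get(k) == v for k, v in pod_node_selector.items())

def feasible_nodes(pod_node_selector: dict, nodes: dict) -> list:
    """Return the names of nodes that satisfy the pod's nodeSelector."""
    return [name for name, labels in nodes.items()
            if node_matches(pod_node_selector, labels)]
```

With a nonempty selector that no node carries, the feasible set is empty, which is exactly what the test asserts via the scheduling event.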
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:45:57.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  4 08:45:58.402: INFO: Waiting up to 5m0s for pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70" in namespace "emptydir-7959" to be "success or failure"
Jul  4 08:45:58.554: INFO: Pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70": Phase="Pending", Reason="", readiness=false. Elapsed: 151.3229ms
Jul  4 08:46:00.558: INFO: Pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155136992s
Jul  4 08:46:02.674: INFO: Pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70": Phase="Running", Reason="", readiness=true. Elapsed: 4.271009577s
Jul  4 08:46:04.677: INFO: Pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.274596362s
STEP: Saw pod success
Jul  4 08:46:04.677: INFO: Pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70" satisfied condition "success or failure"
Jul  4 08:46:04.680: INFO: Trying to get logs from node jerma-worker2 pod pod-63bb16cd-e68c-4ab8-850f-c19245369d70 container test-container: 
STEP: delete the pod
Jul  4 08:46:04.836: INFO: Waiting for pod pod-63bb16cd-e68c-4ab8-850f-c19245369d70 to disappear
Jul  4 08:46:04.875: INFO: Pod pod-63bb16cd-e68c-4ab8-850f-c19245369d70 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:46:04.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7959" for this suite.

• [SLOW TEST:7.171 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1047,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:46:04.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 08:46:05.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69" in namespace "downward-api-1911" to be "success or failure"
Jul  4 08:46:05.135: INFO: Pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69": Phase="Pending", Reason="", readiness=false. Elapsed: 84.716345ms
Jul  4 08:46:07.139: INFO: Pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088936426s
Jul  4 08:46:09.142: INFO: Pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092082124s
Jul  4 08:46:11.145: INFO: Pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095476781s
STEP: Saw pod success
Jul  4 08:46:11.145: INFO: Pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69" satisfied condition "success or failure"
Jul  4 08:46:11.148: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69 container client-container: 
STEP: delete the pod
Jul  4 08:46:11.167: INFO: Waiting for pod downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69 to disappear
Jul  4 08:46:11.173: INFO: Pod downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:46:11.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1911" for this suite.

• [SLOW TEST:6.296 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1075,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:46:11.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-d7bddd00-689c-4632-9bff-9b5841320d90
STEP: Creating a pod to test consume configMaps
Jul  4 08:46:11.751: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3" in namespace "projected-3420" to be "success or failure"
Jul  4 08:46:11.871: INFO: Pod "pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3": Phase="Pending", Reason="", readiness=false. Elapsed: 120.008986ms
Jul  4 08:46:13.875: INFO: Pod "pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12416892s
Jul  4 08:46:15.879: INFO: Pod "pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127823244s
STEP: Saw pod success
Jul  4 08:46:15.879: INFO: Pod "pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3" satisfied condition "success or failure"
Jul  4 08:46:15.881: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  4 08:46:15.898: INFO: Waiting for pod pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3 to disappear
Jul  4 08:46:15.924: INFO: Pod pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:46:15.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3420" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1093,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:46:15.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 08:46:16.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4" in namespace "projected-4767" to be "success or failure"
Jul  4 08:46:16.010: INFO: Pod "downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.641475ms
Jul  4 08:46:18.013: INFO: Pod "downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009597924s
Jul  4 08:46:20.018: INFO: Pod "downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013860868s
STEP: Saw pod success
Jul  4 08:46:20.018: INFO: Pod "downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4" satisfied condition "success or failure"
Jul  4 08:46:20.021: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4 container client-container: 
STEP: delete the pod
Jul  4 08:46:20.041: INFO: Waiting for pod downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4 to disappear
Jul  4 08:46:20.069: INFO: Pod downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:46:20.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4767" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1105,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:46:20.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jul  4 08:46:20.148: INFO: namespace kubectl-8340
Jul  4 08:46:20.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8340'
Jul  4 08:46:20.454: INFO: stderr: ""
Jul  4 08:46:20.454: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul  4 08:46:21.459: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:46:21.459: INFO: Found 0 / 1
Jul  4 08:46:22.626: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:46:22.626: INFO: Found 0 / 1
Jul  4 08:46:23.458: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:46:23.458: INFO: Found 0 / 1
Jul  4 08:46:24.458: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:46:24.458: INFO: Found 1 / 1
Jul  4 08:46:24.458: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  4 08:46:24.462: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:46:24.462: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  4 08:46:24.462: INFO: wait on agnhost-master startup in kubectl-8340 
Jul  4 08:46:24.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-829mc agnhost-master --namespace=kubectl-8340'
Jul  4 08:46:24.573: INFO: stderr: ""
Jul  4 08:46:24.573: INFO: stdout: "Paused\n"
STEP: exposing RC
Jul  4 08:46:24.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8340'
Jul  4 08:46:24.866: INFO: stderr: ""
Jul  4 08:46:24.866: INFO: stdout: "service/rm2 exposed\n"
Jul  4 08:46:25.064: INFO: Service rm2 in namespace kubectl-8340 found.
STEP: exposing service
Jul  4 08:46:27.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8340'
Jul  4 08:46:27.246: INFO: stderr: ""
Jul  4 08:46:27.246: INFO: stdout: "service/rm3 exposed\n"
Jul  4 08:46:27.320: INFO: Service rm3 in namespace kubectl-8340 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:46:29.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8340" for this suite.

• [SLOW TEST:9.259 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":69,"skipped":1106,"failed":0}
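The `kubectl expose` sequence recorded above can be replayed by hand. A minimal sketch, assuming a cluster reachable via `--kubeconfig` and an existing replication controller named `agnhost-master` in namespace `kubectl-8340` (all names and ports taken from the log lines above):

```
# Expose the RC as a service: service port 1234 -> container target port 6379
kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 \
  --namespace=kubectl-8340

# A service can itself be exposed again under a new name and port
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 \
  --namespace=kubectl-8340

# Verify both services exist, as the test does before tearing down
kubectl get services rm2 rm3 --namespace=kubectl-8340
```

Note that exposing a service (rather than an RC) copies the original service's selector, so `rm3` routes to the same pods as `rm2` despite the new port.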
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:46:29.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul  4 08:46:29.426: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  4 08:46:29.434: INFO: Waiting for terminating namespaces to be deleted...
Jul  4 08:46:29.436: INFO: Logging pods the kubelet thinks are on node jerma-worker before test
Jul  4 08:46:29.440: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  4 08:46:29.440: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  4 08:46:29.440: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  4 08:46:29.440: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  4 08:46:29.440: INFO: adopt-release-qsn5v from job-8919 started at 2020-07-04 08:45:46 +0000 UTC (1 container statuses recorded)
Jul  4 08:46:29.440: INFO: 	Container c ready: true, restart count 0
Jul  4 08:46:29.440: INFO: adopt-release-wjgwh from job-8919 started at 2020-07-04 08:45:46 +0000 UTC (1 container statuses recorded)
Jul  4 08:46:29.440: INFO: 	Container c ready: true, restart count 0
Jul  4 08:46:29.445: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test
Jul  4 08:46:29.445: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  4 08:46:29.445: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  4 08:46:29.445: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  4 08:46:29.445: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  4 08:46:29.445: INFO: adopt-release-b7b7n from job-8919 started at 2020-07-04 08:45:54 +0000 UTC (1 container statuses recorded)
Jul  4 08:46:29.445: INFO: 	Container c ready: true, restart count 0
Jul  4 08:46:29.445: INFO: agnhost-master-829mc from kubectl-8340 started at 2020-07-04 08:46:20 +0000 UTC (1 container statuses recorded)
Jul  4 08:46:29.445: INFO: 	Container agnhost-master ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-66676659-1621-4a3e-b04c-c393717e4d57 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-66676659-1621-4a3e-b04c-c393717e4d57 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-66676659-1621-4a3e-b04c-c393717e4d57
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:46:51.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5935" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:22.673 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":70,"skipped":1106,"failed":0}
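The scheduling rule the test above exercises can be sketched as pure logic: two pods may share a `hostPort` on one node as long as their (hostIP, protocol, hostPort) triples do not clash, with `0.0.0.0` acting as a wildcard over every hostIP. This is an illustrative model of the port-conflict check, not the scheduler's actual code; all names below are made up.

```python
from typing import NamedTuple

class HostPort(NamedTuple):
    host_ip: str
    protocol: str  # "TCP" or "UDP"
    port: int

def conflicts(a: HostPort, b: HostPort) -> bool:
    """True when two hostPort requests cannot coexist on the same node."""
    if a.port != b.port or a.protocol != b.protocol:
        return False
    # Identical hostIPs clash; the 0.0.0.0 wildcard clashes with any hostIP.
    return a.host_ip == b.host_ip or "0.0.0.0" in (a.host_ip, b.host_ip)

# The three pods from the test: same port 54321, differing hostIP or protocol.
pod1 = HostPort("127.0.0.1", "TCP", 54321)
pod2 = HostPort("127.0.0.2", "TCP", 54321)  # different hostIP  -> schedulable
pod3 = HostPort("127.0.0.2", "UDP", 54321)  # different protocol -> schedulable

assert not conflicts(pod1, pod2)
assert not conflicts(pod2, pod3)
assert conflicts(pod2, HostPort("127.0.0.2", "TCP", 54321))
```

This is why all three pods in the log land on the same node (`jerma-worker2`) without the scheduler reporting a port conflict.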
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:46:52.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-af21f5a5-8473-4898-b08a-d22f8ade9416
STEP: Creating secret with name s-test-opt-upd-d4db8c67-1ac7-4bd8-84fc-68637ff2e948
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-af21f5a5-8473-4898-b08a-d22f8ade9416
STEP: Updating secret s-test-opt-upd-d4db8c67-1ac7-4bd8-84fc-68637ff2e948
STEP: Creating secret with name s-test-opt-create-f8057e8a-7d3b-459d-9aa8-fd8a5b41cf09
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:48:09.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5780" for this suite.

• [SLOW TEST:77.157 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1124,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:48:09.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 08:48:09.285: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19" in namespace "downward-api-8534" to be "success or failure"
Jul  4 08:48:09.289: INFO: Pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19": Phase="Pending", Reason="", readiness=false. Elapsed: 3.107915ms
Jul  4 08:48:11.293: INFO: Pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007563866s
Jul  4 08:48:13.297: INFO: Pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19": Phase="Running", Reason="", readiness=true. Elapsed: 4.012053189s
Jul  4 08:48:15.302: INFO: Pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016236591s
STEP: Saw pod success
Jul  4 08:48:15.302: INFO: Pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19" satisfied condition "success or failure"
Jul  4 08:48:15.304: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19 container client-container: 
STEP: delete the pod
Jul  4 08:48:15.370: INFO: Waiting for pod downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19 to disappear
Jul  4 08:48:15.385: INFO: Pod downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:48:15.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8534" for this suite.

• [SLOW TEST:6.226 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1131,"failed":0}
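The behaviour the downward API test above asserts can be modelled in a few lines: when a container sets no cpu limit, a `resourceFieldRef` on `limits.cpu` falls back to the node's allocatable cpu, and the quantity is divided by the requested divisor and rounded up. A minimal sketch; the function name, units, and the example allocatable value are illustrative, not from the log.

```python
import math

def downward_cpu_value(limit_millicores, node_allocatable_millicores,
                       divisor_millicores=1000):
    """Value a downwardAPI resourceFieldRef on limits.cpu would expose.

    With no container cpu limit set, the node's allocatable cpu is
    substituted (the case this test verifies). The quantity is divided by
    the divisor and rounded up, per the documented downward API rounding.
    """
    effective = (limit_millicores if limit_millicores is not None
                 else node_allocatable_millicores)
    return math.ceil(effective / divisor_millicores)

# No limit set: a hypothetical 4-core allocatable node shows through as 4.
assert downward_cpu_value(None, 4000) == 4
# A 1500m limit with the default one-core divisor rounds up to 2.
assert downward_cpu_value(1500, 4000) == 2
```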
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:48:15.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jul  4 08:48:15.466: INFO: PodSpec: initContainers in spec.initContainers
Jul  4 08:49:01.488: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-52c72c1b-81a1-4aab-a088-427a23aca46a", GenerateName:"", Namespace:"init-container-6809", SelfLink:"/api/v1/namespaces/init-container-6809/pods/pod-init-52c72c1b-81a1-4aab-a088-427a23aca46a", UID:"f7ce3200-530d-4283-9200-37ca09b6501b", ResourceVersion:"12185", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729449295, loc:(*time.Location)(0x78f7140)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"466030794"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-z7klj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0015fc900), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z7klj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z7klj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z7klj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f6d658), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00204ec60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f6d6e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f6d700)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002f6d708), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f6d70c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449295, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449295, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449295, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449295, loc:(*time.Location)(0x78f7140)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.47", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.47"}}, StartTime:(*v1.Time)(0xc001645560), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016a2700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016a27e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://8c9d54f19a4f2d74071889c64457234d7519a101d5d90458cbb16ca8d8a5a659", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0016455e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0016455a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002f6d78f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:49:01.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6809" for this suite.

• [SLOW TEST:46.163 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":73,"skipped":1134,"failed":0}
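The init-container contract exercised above is strictly ordered: init containers run one at a time in spec order, each must exit 0 before the next starts, and app containers start only after every init container succeeds. With `restartPolicy: Always`, a failing init container (here `init1` running `/bin/false`) is retried with backoff while the pod stays `Pending`, so `init2` and `run1` never start, matching the Pod status dump in the log. A toy model of that ordering, not kubelet code:

```python
def containers_started(init_exit_codes, app_containers):
    """Which containers ever start, given init-container exit codes in order."""
    started = []
    for i, code in enumerate(init_exit_codes):
        started.append(f"init{i + 1}")
        if code != 0:  # failed init blocks later inits and all app containers
            return started, "Pending"
    return started + list(app_containers), "Running"

# The test's pod: init1 fails, so init2 and run1 are never started.
started, phase = containers_started([1, 0], ["run1"])
assert started == ["init1"] and phase == "Pending"
```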
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:49:01.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-fbd6d912-f8bd-4540-9031-f1693030002d
STEP: Creating a pod to test consume secrets
Jul  4 08:49:01.687: INFO: Waiting up to 5m0s for pod "pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1" in namespace "secrets-4105" to be "success or failure"
Jul  4 08:49:01.692: INFO: Pod "pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360595ms
Jul  4 08:49:03.706: INFO: Pod "pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018154001s
Jul  4 08:49:05.710: INFO: Pod "pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022571531s
STEP: Saw pod success
Jul  4 08:49:05.710: INFO: Pod "pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1" satisfied condition "success or failure"
Jul  4 08:49:05.713: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1 container secret-volume-test: 
STEP: delete the pod
Jul  4 08:49:05.857: INFO: Waiting for pod pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1 to disappear
Jul  4 08:49:05.990: INFO: Pod pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:49:05.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4105" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1139,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:49:06.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  4 08:49:10.112: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:49:10.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5832" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1160,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:49:10.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6822, will wait for the garbage collector to delete the pods
Jul  4 08:49:16.623: INFO: Deleting Job.batch foo took: 35.082565ms
Jul  4 08:49:16.923: INFO: Terminating Job.batch foo pods took: 300.319287ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:49:56.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6822" for this suite.

• [SLOW TEST:45.907 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":76,"skipped":1209,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:49:56.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-wwl5
STEP: Creating a pod to test atomic-volume-subpath
Jul  4 08:49:56.438: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wwl5" in namespace "subpath-3972" to be "success or failure"
Jul  4 08:49:56.458: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.827742ms
Jul  4 08:49:58.617: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178951972s
Jul  4 08:50:00.621: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 4.18270258s
Jul  4 08:50:02.625: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 6.186717045s
Jul  4 08:50:04.630: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 8.191183056s
Jul  4 08:50:06.639: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 10.200608958s
Jul  4 08:50:08.643: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 12.204521433s
Jul  4 08:50:10.646: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 14.207848752s
Jul  4 08:50:12.650: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 16.211703657s
Jul  4 08:50:14.655: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 18.216765322s
Jul  4 08:50:16.659: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 20.220999189s
Jul  4 08:50:18.663: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 22.224561192s
Jul  4 08:50:20.701: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 24.262585637s
Jul  4 08:50:22.706: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.267007718s
STEP: Saw pod success
Jul  4 08:50:22.706: INFO: Pod "pod-subpath-test-projected-wwl5" satisfied condition "success or failure"
Jul  4 08:50:22.708: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-wwl5 container test-container-subpath-projected-wwl5: 
STEP: delete the pod
Jul  4 08:50:23.051: INFO: Waiting for pod pod-subpath-test-projected-wwl5 to disappear
Jul  4 08:50:23.058: INFO: Pod pod-subpath-test-projected-wwl5 no longer exists
STEP: Deleting pod pod-subpath-test-projected-wwl5
Jul  4 08:50:23.058: INFO: Deleting pod "pod-subpath-test-projected-wwl5" in namespace "subpath-3972"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:50:23.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3972" for this suite.

• [SLOW TEST:26.848 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":77,"skipped":1220,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:50:23.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jul  4 08:50:29.647: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7295 PodName:pod-sharedvolume-23b2afc7-124f-4a6d-95c5-1b23e7ba98a1 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 08:50:29.647: INFO: >>> kubeConfig: /root/.kube/config
I0704 08:50:29.683232       6 log.go:172] (0xc004490210) (0xc001b12f00) Create stream
I0704 08:50:29.683263       6 log.go:172] (0xc004490210) (0xc001b12f00) Stream added, broadcasting: 1
I0704 08:50:29.685524       6 log.go:172] (0xc004490210) Reply frame received for 1
I0704 08:50:29.685571       6 log.go:172] (0xc004490210) (0xc001d8e140) Create stream
I0704 08:50:29.685592       6 log.go:172] (0xc004490210) (0xc001d8e140) Stream added, broadcasting: 3
I0704 08:50:29.686404       6 log.go:172] (0xc004490210) Reply frame received for 3
I0704 08:50:29.686424       6 log.go:172] (0xc004490210) (0xc001b12fa0) Create stream
I0704 08:50:29.686433       6 log.go:172] (0xc004490210) (0xc001b12fa0) Stream added, broadcasting: 5
I0704 08:50:29.687193       6 log.go:172] (0xc004490210) Reply frame received for 5
I0704 08:50:29.756055       6 log.go:172] (0xc004490210) Data frame received for 3
I0704 08:50:29.756089       6 log.go:172] (0xc001d8e140) (3) Data frame handling
I0704 08:50:29.756098       6 log.go:172] (0xc001d8e140) (3) Data frame sent
I0704 08:50:29.756107       6 log.go:172] (0xc004490210) Data frame received for 3
I0704 08:50:29.756124       6 log.go:172] (0xc001d8e140) (3) Data frame handling
I0704 08:50:29.756164       6 log.go:172] (0xc004490210) Data frame received for 5
I0704 08:50:29.756205       6 log.go:172] (0xc001b12fa0) (5) Data frame handling
I0704 08:50:29.757610       6 log.go:172] (0xc004490210) Data frame received for 1
I0704 08:50:29.757631       6 log.go:172] (0xc001b12f00) (1) Data frame handling
I0704 08:50:29.757640       6 log.go:172] (0xc001b12f00) (1) Data frame sent
I0704 08:50:29.757653       6 log.go:172] (0xc004490210) (0xc001b12f00) Stream removed, broadcasting: 1
I0704 08:50:29.757673       6 log.go:172] (0xc004490210) Go away received
I0704 08:50:29.757760       6 log.go:172] (0xc004490210) (0xc001b12f00) Stream removed, broadcasting: 1
I0704 08:50:29.757780       6 log.go:172] (0xc004490210) (0xc001d8e140) Stream removed, broadcasting: 3
I0704 08:50:29.757790       6 log.go:172] (0xc004490210) (0xc001b12fa0) Stream removed, broadcasting: 5
Jul  4 08:50:29.757: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:50:29.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7295" for this suite.

• [SLOW TEST:6.578 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":78,"skipped":1274,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:50:29.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul  4 08:50:36.497: INFO: Successfully updated pod "labelsupdate28e89a89-f33e-4cde-b7d9-7661a119c1b0"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:50:38.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3335" for this suite.

• [SLOW TEST:8.779 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1301,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:50:38.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-6a68ec0c-5587-4a17-80b9-8fff89275f09
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:50:46.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8168" for this suite.

• [SLOW TEST:7.463 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1307,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:50:46.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul  4 08:50:46.163: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2805 /api/v1/namespaces/watch-2805/configmaps/e2e-watch-test-resource-version 7543c0e5-cd9a-4d1d-8eca-746fc39e525d 12704 0 2020-07-04 08:50:46 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  4 08:50:46.163: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2805 /api/v1/namespaces/watch-2805/configmaps/e2e-watch-test-resource-version 7543c0e5-cd9a-4d1d-8eca-746fc39e525d 12705 0 2020-07-04 08:50:46 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:50:46.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2805" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":81,"skipped":1360,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:50:46.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:50:46.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul  4 08:50:49.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7522 create -f -'
Jul  4 08:50:53.488: INFO: stderr: ""
Jul  4 08:50:53.488: INFO: stdout: "e2e-test-crd-publish-openapi-963-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul  4 08:50:53.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7522 delete e2e-test-crd-publish-openapi-963-crds test-cr'
Jul  4 08:50:53.626: INFO: stderr: ""
Jul  4 08:50:53.626: INFO: stdout: "e2e-test-crd-publish-openapi-963-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jul  4 08:50:53.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7522 apply -f -'
Jul  4 08:50:53.883: INFO: stderr: ""
Jul  4 08:50:53.884: INFO: stdout: "e2e-test-crd-publish-openapi-963-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul  4 08:50:53.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7522 delete e2e-test-crd-publish-openapi-963-crds test-cr'
Jul  4 08:50:54.024: INFO: stderr: ""
Jul  4 08:50:54.024: INFO: stdout: "e2e-test-crd-publish-openapi-963-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul  4 08:50:54.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-963-crds'
Jul  4 08:50:54.255: INFO: stderr: ""
Jul  4 08:50:54.255: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-963-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:50:57.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7522" for this suite.

• [SLOW TEST:10.964 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":82,"skipped":1417,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:50:57.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:50:57.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3577'
Jul  4 08:50:57.881: INFO: stderr: ""
Jul  4 08:50:57.881: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jul  4 08:50:57.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3577'
Jul  4 08:50:58.332: INFO: stderr: ""
Jul  4 08:50:58.332: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul  4 08:50:59.336: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:50:59.336: INFO: Found 0 / 1
Jul  4 08:51:00.338: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:51:00.338: INFO: Found 0 / 1
Jul  4 08:51:01.540: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:51:01.540: INFO: Found 0 / 1
Jul  4 08:51:02.384: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:51:02.384: INFO: Found 0 / 1
Jul  4 08:51:03.411: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:51:03.411: INFO: Found 0 / 1
Jul  4 08:51:04.373: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:51:04.373: INFO: Found 1 / 1
Jul  4 08:51:04.373: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  4 08:51:04.376: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:51:04.376: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  4 08:51:04.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-82dmt --namespace=kubectl-3577'
Jul  4 08:51:04.488: INFO: stderr: ""
Jul  4 08:51:04.488: INFO: stdout: "Name:         agnhost-master-82dmt\nNamespace:    kubectl-3577\nPriority:     0\nNode:         jerma-worker/172.17.0.10\nStart Time:   Sat, 04 Jul 2020 08:50:57 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.37\nIPs:\n  IP:           10.244.1.37\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://cac17c9a77089599b24a1f32c0ba979dd071d11237f894aba750d3a4a718eefe\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 04 Jul 2020 08:51:03 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vwz99 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-vwz99:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-vwz99\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  7s    default-scheduler      Successfully assigned kubectl-3577/agnhost-master-82dmt to jerma-worker\n  Normal  Pulled     5s    kubelet, jerma-worker  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    2s    kubelet, jerma-worker  Created container agnhost-master\n  Normal  Started    1s    kubelet, jerma-worker  Started container agnhost-master\n"
Jul  4 08:51:04.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3577'
Jul  4 08:51:04.616: INFO: stderr: ""
Jul  4 08:51:04.616: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-3577\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: agnhost-master-82dmt\n"
Jul  4 08:51:04.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3577'
Jul  4 08:51:04.721: INFO: stderr: ""
Jul  4 08:51:04.721: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-3577\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.106.172.38\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.37:6379\nSession Affinity:  None\nEvents:            \n"
Jul  4 08:51:04.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Jul  4 08:51:04.861: INFO: stderr: ""
Jul  4 08:51:04.861: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jul 2020 07:50:20 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     \n  RenewTime:       Sat, 04 Jul 2020 08:51:02 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sat, 04 Jul 2020 08:50:58 +0000   Sat, 04 Jul 2020 07:50:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sat, 04 Jul 2020 08:50:58 +0000   Sat, 04 Jul 2020 07:50:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sat, 04 Jul 2020 08:50:58 +0000   Sat, 04 Jul 2020 07:50:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sat, 04 Jul 2020 08:50:58 +0000   Sat, 04 Jul 2020 07:50:54 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.9\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 38019c037cfd4087a82e4827871389a4\n  System UUID:                e9de5062-4fa9-4d0b-8ec1-e753d472da92\n  Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n  Kernel Version:             4.15.0-88-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-pgl6s                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     60m\n  kube-system                 coredns-6955765f44-wm87j                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     60m\n  kube-system                 etcd-jerma-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         60m\n  kube-system                 kindnet-8r2ht                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      60m\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         60m\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         60m\n  kube-system                 kube-proxy-c7j2b                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         60m\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         60m\n  local-path-storage          local-path-provisioner-58f6947c7-87vc8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         60m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:\n  Type     Reason                   Age                From                             Message\n  ----     ------                   ----               ----                             -------\n  Normal   Starting                 60m                kubelet, jerma-control-plane     Starting kubelet.\n  Normal   NodeHasNoDiskPressure    60m (x4 over 60m)  kubelet, jerma-control-plane     Node jerma-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     60m (x4 over 60m)  kubelet, jerma-control-plane     Node jerma-control-plane status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced  60m                kubelet, jerma-control-plane     Updated Node Allocatable limit across pods\n  Normal   NodeHasSufficientMemory  60m (x5 over 60m)  kubelet, jerma-control-plane     Node jerma-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasSufficientMemory  60m                kubelet, jerma-control-plane     Node jerma-control-plane status is now: NodeHasSufficientMemory\n  Normal   Starting                 60m                kubelet, jerma-control-plane     Starting kubelet.\n  Normal   NodeHasNoDiskPressure    60m                kubelet, jerma-control-plane     Node jerma-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     60m                kubelet, jerma-control-plane     Node jerma-control-plane status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced  60m                kubelet, jerma-control-plane     Updated Node Allocatable limit across pods\n  Warning  readOnlySysFS            60m                kube-proxy, jerma-control-plane  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)\n  Normal   Starting                 60m                kube-proxy, jerma-control-plane  Starting kube-proxy.\n  Normal   NodeReady                60m                kubelet, jerma-control-plane     Node jerma-control-plane status is now: NodeReady\n"
Jul  4 08:51:04.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3577'
Jul  4 08:51:04.966: INFO: stderr: ""
Jul  4 08:51:04.966: INFO: stdout: "Name:         kubectl-3577\nLabels:       e2e-framework=kubectl\n              e2e-run=495c0ca3-30ba-4919-ac44-c0ef702cd874\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:51:04.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3577" for this suite.

• [SLOW TEST:7.836 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":83,"skipped":1419,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:51:04.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-8f027dac-068d-4af8-b42b-b37f23727056
STEP: Creating a pod to test consume configMaps
Jul  4 08:51:05.188: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29f1f025-bb32-4652-9baa-7ea85f448553" in namespace "projected-9526" to be "success or failure"
Jul  4 08:51:05.271: INFO: Pod "pod-projected-configmaps-29f1f025-bb32-4652-9baa-7ea85f448553": Phase="Pending", Reason="", readiness=false. Elapsed: 83.036341ms
Jul  4 08:51:07.275: INFO: Pod "pod-projected-configmaps-29f1f025-bb32-4652-9baa-7ea85f448553": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087215629s
Jul  4 08:51:09.279: INFO: Pod "pod-projected-configmaps-29f1f025-bb32-4652-9baa-7ea85f448553": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091203453s
Jul  4 08:51:11.330: INFO: Pod "pod-projected-configmaps-29f1f025-bb32-4652-9baa-7ea85f448553": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142685329s
Jul  4 08:51:13.408: INFO: Pod "pod-projected-configmaps-29f1f025-bb32-4652-9baa-7ea85f448553": Phase="Running", Reason="", readiness=true. Elapsed: 8.219904877s
Jul  4 08:51:15.517: INFO: Pod "pod-projected-configmaps-29f1f025-bb32-4652-9baa-7ea85f448553": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.329618616s
STEP: Saw pod success
Jul  4 08:51:15.517: INFO: Pod "pod-projected-configmaps-29f1f025-bb32-4652-9baa-7ea85f448553" satisfied condition "success or failure"
Jul  4 08:51:15.588: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-29f1f025-bb32-4652-9baa-7ea85f448553 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  4 08:51:15.781: INFO: Waiting for pod pod-projected-configmaps-29f1f025-bb32-4652-9baa-7ea85f448553 to disappear
Jul  4 08:51:16.008: INFO: Pod pod-projected-configmaps-29f1f025-bb32-4652-9baa-7ea85f448553 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:51:16.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9526" for this suite.

• [SLOW TEST:11.375 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1422,"failed":0}
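The projected-configmap test above waits up to 5m0s for the pod to reach "success or failure", repeatedly reading the pod phase until it is terminal. A minimal sketch of that wait loop, assuming a `get_phase` callback that stands in for reading `pod.status.phase` from the API server (names and defaults are illustrative, not the e2e framework's actual implementation):

```python
import time

def wait_for_pod_success(get_phase, timeout=300.0, interval=2.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase ('Succeeded' or
    'Failed') or the timeout elapses. Mirrors the 5m0s 'success or failure'
    wait seen in the log; get_phase is a hypothetical stand-in for an
    API-server lookup of pod.status.phase."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")
```

The log's "Elapsed" entries are simply this loop reporting how long it has been polling on each iteration.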
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:51:16.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  4 08:51:25.894: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:51:25.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9326" for this suite.

• [SLOW TEST:9.612 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1456,"failed":0}
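The termination-message test above exercises `TerminationMessagePolicy: FallbackToLogsOnError`: because the pod succeeds and writes "OK" to its termination message file, the file contents are reported and the log fallback never triggers. A sketch of the documented policy decision, assuming the kubelet's published behavior (this is not the kubelet's actual code; the 2048-byte cap is the documented fallback limit):

```python
def effective_termination_message(message_file_contents, container_logs,
                                  policy, exit_code):
    """Choose the termination message per the documented
    TerminationMessagePolicy semantics: the message file always wins when
    non-empty; with FallbackToLogsOnError, the tail of the container logs is
    used only when the container failed AND the file is empty."""
    if message_file_contents:
        return message_file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        # Documented fallback limit: the last 2048 bytes (or 80 lines) of logs.
        return container_logs[-2048:]
    return ""
```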
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:51:25.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  4 08:51:26.032: INFO: Waiting up to 5m0s for pod "pod-5b5e1ab4-4610-4706-974e-d75a42047c16" in namespace "emptydir-1280" to be "success or failure"
Jul  4 08:51:26.036: INFO: Pod "pod-5b5e1ab4-4610-4706-974e-d75a42047c16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.547419ms
Jul  4 08:51:29.170: INFO: Pod "pod-5b5e1ab4-4610-4706-974e-d75a42047c16": Phase="Pending", Reason="", readiness=false. Elapsed: 3.138833942s
Jul  4 08:51:31.540: INFO: Pod "pod-5b5e1ab4-4610-4706-974e-d75a42047c16": Phase="Pending", Reason="", readiness=false. Elapsed: 5.508503878s
Jul  4 08:51:33.546: INFO: Pod "pod-5b5e1ab4-4610-4706-974e-d75a42047c16": Phase="Pending", Reason="", readiness=false. Elapsed: 7.514578976s
Jul  4 08:51:35.726: INFO: Pod "pod-5b5e1ab4-4610-4706-974e-d75a42047c16": Phase="Pending", Reason="", readiness=false. Elapsed: 9.694763312s
Jul  4 08:51:37.730: INFO: Pod "pod-5b5e1ab4-4610-4706-974e-d75a42047c16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.698789258s
STEP: Saw pod success
Jul  4 08:51:37.730: INFO: Pod "pod-5b5e1ab4-4610-4706-974e-d75a42047c16" satisfied condition "success or failure"
Jul  4 08:51:37.733: INFO: Trying to get logs from node jerma-worker pod pod-5b5e1ab4-4610-4706-974e-d75a42047c16 container test-container: 
STEP: delete the pod
Jul  4 08:51:37.750: INFO: Waiting for pod pod-5b5e1ab4-4610-4706-974e-d75a42047c16 to disappear
Jul  4 08:51:37.840: INFO: Pod pod-5b5e1ab4-4610-4706-974e-d75a42047c16 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:51:37.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1280" for this suite.

• [SLOW TEST:11.887 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1462,"failed":0}
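The emptydir test above writes a file with mode 0666 on the default-medium volume and verifies the permissions the test container prints. Rendering a mode value in that `-rw-rw-rw-` form is a one-liner with the standard library (a sketch; the actual check runs inside the test container, not in Python):

```python
import stat

def mode_string(mode):
    """Render file permission bits the way 'ls -l' (and the emptydir test
    container) displays them, e.g. 0o666 -> '-rw-rw-rw-'. S_IFREG marks the
    value as a regular file so the leading character is '-'."""
    return stat.filemode(stat.S_IFREG | mode)
```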
SSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:51:37.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-1926
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1926 to expose endpoints map[]
Jul  4 08:51:38.037: INFO: Get endpoints failed (18.24488ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul  4 08:51:39.535: INFO: successfully validated that service endpoint-test2 in namespace services-1926 exposes endpoints map[] (1.515914908s elapsed)
STEP: Creating pod pod1 in namespace services-1926
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1926 to expose endpoints map[pod1:[80]]
Jul  4 08:51:43.752: INFO: successfully validated that service endpoint-test2 in namespace services-1926 exposes endpoints map[pod1:[80]] (4.191166388s elapsed)
STEP: Creating pod pod2 in namespace services-1926
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1926 to expose endpoints map[pod1:[80] pod2:[80]]
Jul  4 08:51:47.166: INFO: successfully validated that service endpoint-test2 in namespace services-1926 exposes endpoints map[pod1:[80] pod2:[80]] (3.411380334s elapsed)
STEP: Deleting pod pod1 in namespace services-1926
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1926 to expose endpoints map[pod2:[80]]
Jul  4 08:51:48.213: INFO: successfully validated that service endpoint-test2 in namespace services-1926 exposes endpoints map[pod2:[80]] (1.031428019s elapsed)
STEP: Deleting pod pod2 in namespace services-1926
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1926 to expose endpoints map[]
Jul  4 08:51:49.264: INFO: successfully validated that service endpoint-test2 in namespace services-1926 exposes endpoints map[] (1.047282906s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:51:49.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1926" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:11.682 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":87,"skipped":1465,"failed":0}
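The Services test above repeatedly validates that `endpoint-test2` exposes exactly the expected endpoint map (e.g. `map[pod1:[80] pod2:[80]]`) as pods are created and deleted. The comparison it needs is "same pod names, same set of ports per pod, order-insensitive" — a minimal sketch of that check, assuming plain dicts of pod name to port list (not the framework's implementation):

```python
def endpoints_equal(expected, actual):
    """Return True when two endpoint maps (pod name -> list of ports) agree:
    identical pod-name sets, and the same multiset of ports for each pod,
    ignoring port order."""
    if set(expected) != set(actual):
        return False
    return all(sorted(expected[p]) == sorted(actual[p]) for p in expected)
```

The log's "waiting up to 3m0s ... to expose endpoints" lines are a poll that re-fetches the Endpoints object and applies exactly this kind of comparison until it passes or times out.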
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:51:49.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-0765c190-89d0-43d0-9c1b-24d0d2d4234f
STEP: Creating a pod to test consume configMaps
Jul  4 08:51:49.601: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6efacfda-c5cc-4bc1-a88b-c46fbb4e244d" in namespace "projected-7883" to be "success or failure"
Jul  4 08:51:49.622: INFO: Pod "pod-projected-configmaps-6efacfda-c5cc-4bc1-a88b-c46fbb4e244d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.374929ms
Jul  4 08:51:51.642: INFO: Pod "pod-projected-configmaps-6efacfda-c5cc-4bc1-a88b-c46fbb4e244d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041032566s
Jul  4 08:51:53.702: INFO: Pod "pod-projected-configmaps-6efacfda-c5cc-4bc1-a88b-c46fbb4e244d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100715205s
STEP: Saw pod success
Jul  4 08:51:53.702: INFO: Pod "pod-projected-configmaps-6efacfda-c5cc-4bc1-a88b-c46fbb4e244d" satisfied condition "success or failure"
Jul  4 08:51:53.705: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-6efacfda-c5cc-4bc1-a88b-c46fbb4e244d container projected-configmap-volume-test: 
STEP: delete the pod
Jul  4 08:51:53.759: INFO: Waiting for pod pod-projected-configmaps-6efacfda-c5cc-4bc1-a88b-c46fbb4e244d to disappear
Jul  4 08:51:53.767: INFO: Pod pod-projected-configmaps-6efacfda-c5cc-4bc1-a88b-c46fbb4e244d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:51:53.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7883" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1472,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:51:53.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:51:53.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jul  4 08:51:56.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3226 create -f -'
Jul  4 08:52:00.265: INFO: stderr: ""
Jul  4 08:52:00.266: INFO: stdout: "e2e-test-crd-publish-openapi-4847-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul  4 08:52:00.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3226 delete e2e-test-crd-publish-openapi-4847-crds test-foo'
Jul  4 08:52:00.376: INFO: stderr: ""
Jul  4 08:52:00.376: INFO: stdout: "e2e-test-crd-publish-openapi-4847-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jul  4 08:52:00.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3226 apply -f -'
Jul  4 08:52:00.640: INFO: stderr: ""
Jul  4 08:52:00.640: INFO: stdout: "e2e-test-crd-publish-openapi-4847-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul  4 08:52:00.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3226 delete e2e-test-crd-publish-openapi-4847-crds test-foo'
Jul  4 08:52:00.747: INFO: stderr: ""
Jul  4 08:52:00.747: INFO: stdout: "e2e-test-crd-publish-openapi-4847-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jul  4 08:52:00.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3226 create -f -'
Jul  4 08:52:00.971: INFO: rc: 1
Jul  4 08:52:00.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3226 apply -f -'
Jul  4 08:52:01.218: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jul  4 08:52:01.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3226 create -f -'
Jul  4 08:52:01.478: INFO: rc: 1
Jul  4 08:52:01.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3226 apply -f -'
Jul  4 08:52:01.715: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jul  4 08:52:01.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4847-crds'
Jul  4 08:52:01.970: INFO: stderr: ""
Jul  4 08:52:01.970: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4847-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jul  4 08:52:01.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4847-crds.metadata'
Jul  4 08:52:02.288: INFO: stderr: ""
Jul  4 08:52:02.289: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4847-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. 
This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. 
If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. 
May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jul  4 08:52:02.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4847-crds.spec'
Jul  4 08:52:02.553: INFO: stderr: ""
Jul  4 08:52:02.553: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4847-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jul  4 08:52:02.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4847-crds.spec.bars'
Jul  4 08:52:03.712: INFO: stderr: ""
Jul  4 08:52:03.712: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4847-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jul  4 08:52:03.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4847-crds.spec.bars2'
Jul  4 08:52:04.356: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:52:07.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3226" for this suite.

• [SLOW TEST:13.476 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":89,"skipped":1473,"failed":0}
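Editor's note: the `kubectl explain` calls above walk a CRD's published OpenAPI schema (Foo → spec → bars). A minimal sketch of a CRD carrying such a validation schema — names and field types are reconstructed from the explain output and should be treated as illustrative, not the exact e2e fixture:

```yaml
# Illustrative CRD reconstructed from the explain output above; not the
# actual test fixture. Field types (e.g. age as string) are assumptions.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            description: Specification of Foo
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: [name]
                  properties:
                    name: {type: string, description: Name of Bar.}
                    age: {type: string, description: Age of Bar.}
                    bazs:
                      description: List of Bazs.
                      type: array
                      items: {type: string}
```

Because the structural schema is published into the aggregated OpenAPI document, `kubectl explain foos.spec.bars` can surface the descriptions, and a path that is not in the schema (`bars2`) returns a non-zero rc as the test checks.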
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:52:07.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-b95965bb-b98f-4a6d-a3e9-df96e34b8bfd
STEP: Creating a pod to test consume configMaps
Jul  4 08:52:07.406: INFO: Waiting up to 5m0s for pod "pod-configmaps-e30204e5-9119-4145-a669-6f09e31d47d2" in namespace "configmap-3973" to be "success or failure"
Jul  4 08:52:07.420: INFO: Pod "pod-configmaps-e30204e5-9119-4145-a669-6f09e31d47d2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.356586ms
Jul  4 08:52:09.425: INFO: Pod "pod-configmaps-e30204e5-9119-4145-a669-6f09e31d47d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01847134s
Jul  4 08:52:11.429: INFO: Pod "pod-configmaps-e30204e5-9119-4145-a669-6f09e31d47d2": Phase="Running", Reason="", readiness=true. Elapsed: 4.023174583s
Jul  4 08:52:13.434: INFO: Pod "pod-configmaps-e30204e5-9119-4145-a669-6f09e31d47d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027576179s
STEP: Saw pod success
Jul  4 08:52:13.434: INFO: Pod "pod-configmaps-e30204e5-9119-4145-a669-6f09e31d47d2" satisfied condition "success or failure"
Jul  4 08:52:13.437: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e30204e5-9119-4145-a669-6f09e31d47d2 container configmap-volume-test: 
STEP: delete the pod
Jul  4 08:52:13.452: INFO: Waiting for pod pod-configmaps-e30204e5-9119-4145-a669-6f09e31d47d2 to disappear
Jul  4 08:52:13.456: INFO: Pod pod-configmaps-e30204e5-9119-4145-a669-6f09e31d47d2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:52:13.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3973" for this suite.

• [SLOW TEST:6.212 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1495,"failed":0}
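Editor's note: the passing case above mounts a ConfigMap as a volume and reads it from a non-root container. A hedged sketch of an equivalent pod manifest (image, UID, and ConfigMap name are assumptions, not the test's exact fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-nonroot    # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # any non-root UID; exact value is an assumption
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap         # hypothetical ConfigMap holding key data-1
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
```

The pod exiting 0 after printing the key's value is what drives the Pending → Running → Succeeded progression logged above.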
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:52:13.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Jul  4 08:52:13.537: INFO: Waiting up to 5m0s for pod "client-containers-9b16e8bc-bcb7-4b04-9462-49011ed69970" in namespace "containers-2749" to be "success or failure"
Jul  4 08:52:13.540: INFO: Pod "client-containers-9b16e8bc-bcb7-4b04-9462-49011ed69970": Phase="Pending", Reason="", readiness=false. Elapsed: 3.587285ms
Jul  4 08:52:15.691: INFO: Pod "client-containers-9b16e8bc-bcb7-4b04-9462-49011ed69970": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154362945s
Jul  4 08:52:17.751: INFO: Pod "client-containers-9b16e8bc-bcb7-4b04-9462-49011ed69970": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.21374598s
STEP: Saw pod success
Jul  4 08:52:17.751: INFO: Pod "client-containers-9b16e8bc-bcb7-4b04-9462-49011ed69970" satisfied condition "success or failure"
Jul  4 08:52:17.753: INFO: Trying to get logs from node jerma-worker2 pod client-containers-9b16e8bc-bcb7-4b04-9462-49011ed69970 container test-container: 
STEP: delete the pod
Jul  4 08:52:17.830: INFO: Waiting for pod client-containers-9b16e8bc-bcb7-4b04-9462-49011ed69970 to disappear
Jul  4 08:52:17.906: INFO: Pod client-containers-9b16e8bc-bcb7-4b04-9462-49011ed69970 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:52:17.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2749" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1525,"failed":0}
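Editor's note: this case verifies that `command` in a container spec overrides the image's Docker ENTRYPOINT. An illustrative manifest (image and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # `command` replaces the image ENTRYPOINT; `args` (unset here)
    # would replace the image CMD.
    command: ["/bin/echo", "entrypoint overridden"]
```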
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:52:17.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-de48e78a-6f3f-4e3c-819a-f8d766575dee
STEP: Creating a pod to test consume configMaps
Jul  4 08:52:18.075: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3b9ab3d-a84c-4367-9d5d-03291408bbf4" in namespace "configmap-13" to be "success or failure"
Jul  4 08:52:18.091: INFO: Pod "pod-configmaps-b3b9ab3d-a84c-4367-9d5d-03291408bbf4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.366499ms
Jul  4 08:52:20.103: INFO: Pod "pod-configmaps-b3b9ab3d-a84c-4367-9d5d-03291408bbf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028825637s
Jul  4 08:52:22.339: INFO: Pod "pod-configmaps-b3b9ab3d-a84c-4367-9d5d-03291408bbf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.264003078s
STEP: Saw pod success
Jul  4 08:52:22.339: INFO: Pod "pod-configmaps-b3b9ab3d-a84c-4367-9d5d-03291408bbf4" satisfied condition "success or failure"
Jul  4 08:52:22.342: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b3b9ab3d-a84c-4367-9d5d-03291408bbf4 container configmap-volume-test: 
STEP: delete the pod
Jul  4 08:52:22.526: INFO: Waiting for pod pod-configmaps-b3b9ab3d-a84c-4367-9d5d-03291408bbf4 to disappear
Jul  4 08:52:22.540: INFO: Pod pod-configmaps-b3b9ab3d-a84c-4367-9d5d-03291408bbf4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:52:22.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-13" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1539,"failed":0}
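Editor's note: the `defaultMode` variant above sets the permission bits on files projected from the ConfigMap volume. A sketch (mode value and names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-mode   # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap     # hypothetical ConfigMap
      defaultMode: 0400      # projected files become r-------- ; mode is an assumption
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["ls", "-l", "/etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
```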
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:52:22.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 08:52:23.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 08:52:25.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449543, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449543, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449543, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449543, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 08:52:27.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449543, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449543, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449543, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449543, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 08:52:30.231: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:52:30.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:52:31.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9259" for this suite.
STEP: Destroying namespace "webhook-9259-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.766 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":93,"skipped":1632,"failed":0}
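Editor's note: the webhook case above registers a validating admission webhook for a custom resource and expects CREATE, UPDATE, and DELETE of offending objects to be denied. A hedged sketch of such a registration — group, resource, service path, and policy here are placeholders, not the test's actual objects:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-webhook   # hypothetical name
webhooks:
- name: deny-cr.example.com            # hypothetical webhook name
  rules:
  - apiGroups: ["stable.example.com"]  # placeholder group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["testcrds"]            # placeholder resource
  clientConfig:
    service:
      namespace: webhook-9259          # namespace seen in the log above
      name: e2e-test-webhook
      path: /custom-resource           # path is an assumption
    # caBundle elided
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

Once the offending key is removed from the custom resource's data, the same webhook admits the delete, matching the final STEP above.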
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:52:32.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Jul  4 08:52:32.453: INFO: Waiting up to 5m0s for pod "var-expansion-9357b0c9-eb81-4c99-870d-86ca52aa74c4" in namespace "var-expansion-3627" to be "success or failure"
Jul  4 08:52:32.463: INFO: Pod "var-expansion-9357b0c9-eb81-4c99-870d-86ca52aa74c4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.257501ms
Jul  4 08:52:34.467: INFO: Pod "var-expansion-9357b0c9-eb81-4c99-870d-86ca52aa74c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013881649s
Jul  4 08:52:36.472: INFO: Pod "var-expansion-9357b0c9-eb81-4c99-870d-86ca52aa74c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018196254s
STEP: Saw pod success
Jul  4 08:52:36.472: INFO: Pod "var-expansion-9357b0c9-eb81-4c99-870d-86ca52aa74c4" satisfied condition "success or failure"
Jul  4 08:52:36.475: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-9357b0c9-eb81-4c99-870d-86ca52aa74c4 container dapi-container: 
STEP: delete the pod
Jul  4 08:52:36.532: INFO: Waiting for pod var-expansion-9357b0c9-eb81-4c99-870d-86ca52aa74c4 to disappear
Jul  4 08:52:36.565: INFO: Pod var-expansion-9357b0c9-eb81-4c99-870d-86ca52aa74c4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:52:36.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3627" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1643,"failed":0}
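Editor's note: the variable-expansion case substitutes `$(VAR)` references in a container's args from its env block. An illustrative manifest (variable name and value are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: TEST_VAR
      value: test-value
    command: ["/bin/sh", "-c"]
    # $(TEST_VAR) is expanded by the kubelet from the env block above,
    # before the shell ever runs.
    args: ["echo $(TEST_VAR)"]
```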

------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:52:36.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3752
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-3752
I0704 08:52:36.896073       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3752, replica count: 2
I0704 08:52:39.946697       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 08:52:42.946948       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  4 08:52:42.946: INFO: Creating new exec pod
Jul  4 08:52:50.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3752 execpodj7rxn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul  4 08:52:50.496: INFO: stderr: "I0704 08:52:50.389543    1474 log.go:172] (0xc0007189a0) (0xc00085a140) Create stream\nI0704 08:52:50.389616    1474 log.go:172] (0xc0007189a0) (0xc00085a140) Stream added, broadcasting: 1\nI0704 08:52:50.394868    1474 log.go:172] (0xc0007189a0) Reply frame received for 1\nI0704 08:52:50.394910    1474 log.go:172] (0xc0007189a0) (0xc00064fae0) Create stream\nI0704 08:52:50.394923    1474 log.go:172] (0xc0007189a0) (0xc00064fae0) Stream added, broadcasting: 3\nI0704 08:52:50.395837    1474 log.go:172] (0xc0007189a0) Reply frame received for 3\nI0704 08:52:50.395859    1474 log.go:172] (0xc0007189a0) (0xc00085a1e0) Create stream\nI0704 08:52:50.395868    1474 log.go:172] (0xc0007189a0) (0xc00085a1e0) Stream added, broadcasting: 5\nI0704 08:52:50.396790    1474 log.go:172] (0xc0007189a0) Reply frame received for 5\nI0704 08:52:50.490161    1474 log.go:172] (0xc0007189a0) Data frame received for 5\nI0704 08:52:50.490186    1474 log.go:172] (0xc00085a1e0) (5) Data frame handling\nI0704 08:52:50.490205    1474 log.go:172] (0xc00085a1e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0704 08:52:50.490798    1474 log.go:172] (0xc0007189a0) Data frame received for 5\nI0704 08:52:50.490820    1474 log.go:172] (0xc00085a1e0) (5) Data frame handling\nI0704 08:52:50.490843    1474 log.go:172] (0xc00085a1e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0704 08:52:50.491138    1474 log.go:172] (0xc0007189a0) Data frame received for 5\nI0704 08:52:50.491161    1474 log.go:172] (0xc00085a1e0) (5) Data frame handling\nI0704 08:52:50.491421    1474 log.go:172] (0xc0007189a0) Data frame received for 3\nI0704 08:52:50.491433    1474 log.go:172] (0xc00064fae0) (3) Data frame handling\nI0704 08:52:50.493046    1474 log.go:172] (0xc0007189a0) Data frame received for 1\nI0704 08:52:50.493076    1474 log.go:172] (0xc00085a140) (1) Data frame handling\nI0704 08:52:50.493088    1474 log.go:172] (0xc00085a140) (1) Data frame sent\nI0704 08:52:50.493108    1474 log.go:172] (0xc0007189a0) (0xc00085a140) Stream removed, broadcasting: 1\nI0704 08:52:50.493260    1474 log.go:172] (0xc0007189a0) Go away received\nI0704 08:52:50.493641    1474 log.go:172] (0xc0007189a0) (0xc00085a140) Stream removed, broadcasting: 1\nI0704 08:52:50.493657    1474 log.go:172] (0xc0007189a0) (0xc00064fae0) Stream removed, broadcasting: 3\nI0704 08:52:50.493663    1474 log.go:172] (0xc0007189a0) (0xc00085a1e0) Stream removed, broadcasting: 5\n"
Jul  4 08:52:50.497: INFO: stdout: ""
Jul  4 08:52:50.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3752 execpodj7rxn -- /bin/sh -x -c nc -zv -t -w 2 10.103.102.17 80'
Jul  4 08:52:50.683: INFO: stderr: "I0704 08:52:50.611510    1496 log.go:172] (0xc000936000) (0xc000aaa000) Create stream\nI0704 08:52:50.611563    1496 log.go:172] (0xc000936000) (0xc000aaa000) Stream added, broadcasting: 1\nI0704 08:52:50.614508    1496 log.go:172] (0xc000936000) Reply frame received for 1\nI0704 08:52:50.614548    1496 log.go:172] (0xc000936000) (0xc0009ec000) Create stream\nI0704 08:52:50.614565    1496 log.go:172] (0xc000936000) (0xc0009ec000) Stream added, broadcasting: 3\nI0704 08:52:50.615348    1496 log.go:172] (0xc000936000) Reply frame received for 3\nI0704 08:52:50.615386    1496 log.go:172] (0xc000936000) (0xc000aaa0a0) Create stream\nI0704 08:52:50.615397    1496 log.go:172] (0xc000936000) (0xc000aaa0a0) Stream added, broadcasting: 5\nI0704 08:52:50.616137    1496 log.go:172] (0xc000936000) Reply frame received for 5\nI0704 08:52:50.678437    1496 log.go:172] (0xc000936000) Data frame received for 5\nI0704 08:52:50.678470    1496 log.go:172] (0xc000aaa0a0) (5) Data frame handling\nI0704 08:52:50.678482    1496 log.go:172] (0xc000aaa0a0) (5) Data frame sent\nI0704 08:52:50.678490    1496 log.go:172] (0xc000936000) Data frame received for 5\nI0704 08:52:50.678497    1496 log.go:172] (0xc000aaa0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.102.17 80\nConnection to 10.103.102.17 80 port [tcp/http] succeeded!\nI0704 08:52:50.678519    1496 log.go:172] (0xc000936000) Data frame received for 3\nI0704 08:52:50.678527    1496 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0704 08:52:50.679657    1496 log.go:172] (0xc000936000) Data frame received for 1\nI0704 08:52:50.679687    1496 log.go:172] (0xc000aaa000) (1) Data frame handling\nI0704 08:52:50.679711    1496 log.go:172] (0xc000aaa000) (1) Data frame sent\nI0704 08:52:50.679750    1496 log.go:172] (0xc000936000) (0xc000aaa000) Stream removed, broadcasting: 1\nI0704 08:52:50.679775    1496 log.go:172] (0xc000936000) Go away received\nI0704 08:52:50.680096    1496 log.go:172] (0xc000936000) (0xc000aaa000) Stream removed, broadcasting: 1\nI0704 08:52:50.680111    1496 log.go:172] (0xc000936000) (0xc0009ec000) Stream removed, broadcasting: 3\nI0704 08:52:50.680117    1496 log.go:172] (0xc000936000) (0xc000aaa0a0) Stream removed, broadcasting: 5\n"
Jul  4 08:52:50.683: INFO: stdout: ""
Jul  4 08:52:50.683: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:52:50.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3752" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:14.176 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":95,"skipped":1643,"failed":0}
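Editor's note: this service test starts with `type: ExternalName` and mutates the same object to `type: ClusterIP` backed by the replication controller's pods, then checks TCP reachability with `nc` against both the service name and the assigned cluster IP. A sketch of the before/after objects (externalName target and selector label are assumptions):

```yaml
# Before: the service resolves via a DNS CNAME only; no cluster IP.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com      # target is an assumption
---
# After: same name, now a ClusterIP service selecting the RC's pods.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    name: externalname-service   # selector label is an assumption
  ports:
  - port: 80
    targetPort: 80
```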
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:52:50.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  4 08:52:50.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9500'
Jul  4 08:52:50.958: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  4 08:52:50.958: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495
Jul  4 08:52:54.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9500'
Jul  4 08:52:54.887: INFO: stderr: ""
Jul  4 08:52:54.887: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:52:54.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9500" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":96,"skipped":1643,"failed":0}
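Editor's note: the stderr above records the `kubectl run --generator=deployment/apps.v1` deprecation (generators were removed in later kubectl releases, where `kubectl run` only creates pods). The deployment the old generator produced is roughly equivalent to this manifest — labels follow the generator's `run:` convention; treat the details as approximate:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
  labels:
    run: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine
```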
S
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:52:55.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-4576b5b7-eafc-4274-b7b9-0debf22e60c8
STEP: Creating a pod to test consume secrets
Jul  4 08:52:56.497: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564" in namespace "projected-7022" to be "success or failure"
Jul  4 08:52:56.704: INFO: Pod "pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564": Phase="Pending", Reason="", readiness=false. Elapsed: 206.865609ms
Jul  4 08:52:58.824: INFO: Pod "pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32699654s
Jul  4 08:53:02.076: INFO: Pod "pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564": Phase="Pending", Reason="", readiness=false. Elapsed: 5.578774916s
Jul  4 08:53:04.705: INFO: Pod "pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564": Phase="Pending", Reason="", readiness=false. Elapsed: 8.20783262s
Jul  4 08:53:07.310: INFO: Pod "pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564": Phase="Pending", Reason="", readiness=false. Elapsed: 10.812371256s
Jul  4 08:53:10.143: INFO: Pod "pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564": Phase="Running", Reason="", readiness=true. Elapsed: 13.645732456s
Jul  4 08:53:12.303: INFO: Pod "pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.805718834s
STEP: Saw pod success
Jul  4 08:53:12.303: INFO: Pod "pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564" satisfied condition "success or failure"
Jul  4 08:53:12.383: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564 container secret-volume-test: 
STEP: delete the pod
Jul  4 08:53:13.335: INFO: Waiting for pod pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564 to disappear
Jul  4 08:53:13.399: INFO: Pod pod-projected-secrets-71210f6b-5bb4-4338-93f5-f94400236564 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:53:13.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7022" for this suite.

• [SLOW TEST:19.023 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1644,"failed":0}
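Editor's note: the projected-secret case mounts the same Secret through two projected volumes in one pod. Illustrative manifest (secret name, image, and mount paths are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: my-secret            # hypothetical Secret
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: my-secret            # same Secret, second volume
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["ls", "/etc/secret-volume-1", "/etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
```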
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:53:14.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-7cecd10c-3cc5-4358-8216-78ef38e701c2
STEP: Creating a pod to test consume secrets
Jul  4 08:53:14.728: INFO: Waiting up to 5m0s for pod "pod-secrets-66eb2ab4-2c9c-4b19-975b-295539e16a6c" in namespace "secrets-6005" to be "success or failure"
Jul  4 08:53:14.895: INFO: Pod "pod-secrets-66eb2ab4-2c9c-4b19-975b-295539e16a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 167.185621ms
Jul  4 08:53:16.898: INFO: Pod "pod-secrets-66eb2ab4-2c9c-4b19-975b-295539e16a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170361013s
Jul  4 08:53:20.183: INFO: Pod "pod-secrets-66eb2ab4-2c9c-4b19-975b-295539e16a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.455387044s
Jul  4 08:53:22.513: INFO: Pod "pod-secrets-66eb2ab4-2c9c-4b19-975b-295539e16a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.784736434s
Jul  4 08:53:24.517: INFO: Pod "pod-secrets-66eb2ab4-2c9c-4b19-975b-295539e16a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.789295088s
Jul  4 08:53:26.520: INFO: Pod "pod-secrets-66eb2ab4-2c9c-4b19-975b-295539e16a6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.792387918s
STEP: Saw pod success
Jul  4 08:53:26.520: INFO: Pod "pod-secrets-66eb2ab4-2c9c-4b19-975b-295539e16a6c" satisfied condition "success or failure"
Jul  4 08:53:26.522: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-66eb2ab4-2c9c-4b19-975b-295539e16a6c container secret-volume-test: 
STEP: delete the pod
Jul  4 08:53:26.543: INFO: Waiting for pod pod-secrets-66eb2ab4-2c9c-4b19-975b-295539e16a6c to disappear
Jul  4 08:53:26.614: INFO: Pod pod-secrets-66eb2ab4-2c9c-4b19-975b-295539e16a6c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:53:26.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6005" for this suite.

• [SLOW TEST:12.859 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1647,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:53:26.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:53:38.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3138" for this suite.

• [SLOW TEST:11.515 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":99,"skipped":1661,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:53:38.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-8ef86d3e-26c7-4fd5-a27c-f92e281a6de1
STEP: Creating a pod to test consume secrets
Jul  4 08:53:38.989: INFO: Waiting up to 5m0s for pod "pod-secrets-03147058-2a19-4667-b47e-c927a312402d" in namespace "secrets-5239" to be "success or failure"
Jul  4 08:53:39.018: INFO: Pod "pod-secrets-03147058-2a19-4667-b47e-c927a312402d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.524929ms
Jul  4 08:53:41.184: INFO: Pod "pod-secrets-03147058-2a19-4667-b47e-c927a312402d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194004913s
Jul  4 08:53:44.465: INFO: Pod "pod-secrets-03147058-2a19-4667-b47e-c927a312402d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.475862506s
STEP: Saw pod success
Jul  4 08:53:44.465: INFO: Pod "pod-secrets-03147058-2a19-4667-b47e-c927a312402d" satisfied condition "success or failure"
Jul  4 08:53:44.467: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-03147058-2a19-4667-b47e-c927a312402d container secret-env-test: 
STEP: delete the pod
Jul  4 08:53:44.805: INFO: Waiting for pod pod-secrets-03147058-2a19-4667-b47e-c927a312402d to disappear
Jul  4 08:53:44.820: INFO: Pod pod-secrets-03147058-2a19-4667-b47e-c927a312402d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:53:44.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5239" for this suite.

• [SLOW TEST:6.457 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1698,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:53:44.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  4 08:53:44.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-7785'
Jul  4 08:53:45.072: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  4 08:53:45.072: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631
Jul  4 08:53:47.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7785'
Jul  4 08:53:47.338: INFO: stderr: ""
Jul  4 08:53:47.338: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:53:47.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7785" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":101,"skipped":1724,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:53:47.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:53:47.784: INFO: Creating ReplicaSet my-hostname-basic-b6795667-22bb-4304-8c95-abd7a8aa9f16
Jul  4 08:53:47.926: INFO: Pod name my-hostname-basic-b6795667-22bb-4304-8c95-abd7a8aa9f16: Found 0 pods out of 1
Jul  4 08:53:52.930: INFO: Pod name my-hostname-basic-b6795667-22bb-4304-8c95-abd7a8aa9f16: Found 1 pods out of 1
Jul  4 08:53:52.930: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b6795667-22bb-4304-8c95-abd7a8aa9f16" is running
Jul  4 08:53:52.938: INFO: Pod "my-hostname-basic-b6795667-22bb-4304-8c95-abd7a8aa9f16-xb674" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-04 08:53:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-04 08:53:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-04 08:53:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-04 08:53:48 +0000 UTC Reason: Message:}])
Jul  4 08:53:52.938: INFO: Trying to dial the pod
Jul  4 08:53:57.950: INFO: Controller my-hostname-basic-b6795667-22bb-4304-8c95-abd7a8aa9f16: Got expected result from replica 1 [my-hostname-basic-b6795667-22bb-4304-8c95-abd7a8aa9f16-xb674]: "my-hostname-basic-b6795667-22bb-4304-8c95-abd7a8aa9f16-xb674", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:53:57.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6095" for this suite.

• [SLOW TEST:10.612 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":102,"skipped":1740,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:53:57.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:53:58.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5400" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1753,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:53:58.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 08:53:58.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30b94fc9-4251-4ea1-8132-c1ce8c731dbd" in namespace "downward-api-7265" to be "success or failure"
Jul  4 08:53:58.246: INFO: Pod "downwardapi-volume-30b94fc9-4251-4ea1-8132-c1ce8c731dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.469473ms
Jul  4 08:54:00.250: INFO: Pod "downwardapi-volume-30b94fc9-4251-4ea1-8132-c1ce8c731dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024510647s
Jul  4 08:54:02.254: INFO: Pod "downwardapi-volume-30b94fc9-4251-4ea1-8132-c1ce8c731dbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029060149s
STEP: Saw pod success
Jul  4 08:54:02.254: INFO: Pod "downwardapi-volume-30b94fc9-4251-4ea1-8132-c1ce8c731dbd" satisfied condition "success or failure"
Jul  4 08:54:02.258: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-30b94fc9-4251-4ea1-8132-c1ce8c731dbd container client-container: 
STEP: delete the pod
Jul  4 08:54:02.278: INFO: Waiting for pod downwardapi-volume-30b94fc9-4251-4ea1-8132-c1ce8c731dbd to disappear
Jul  4 08:54:02.282: INFO: Pod downwardapi-volume-30b94fc9-4251-4ea1-8132-c1ce8c731dbd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:54:02.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7265" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1773,"failed":0}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:54:02.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:54:02.406: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul  4 08:54:02.432: INFO: Number of nodes with available pods: 0
Jul  4 08:54:02.432: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul  4 08:54:02.474: INFO: Number of nodes with available pods: 0
Jul  4 08:54:02.474: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:03.477: INFO: Number of nodes with available pods: 0
Jul  4 08:54:03.477: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:04.478: INFO: Number of nodes with available pods: 0
Jul  4 08:54:04.478: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:05.478: INFO: Number of nodes with available pods: 0
Jul  4 08:54:05.478: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:06.476: INFO: Number of nodes with available pods: 0
Jul  4 08:54:06.476: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:07.484: INFO: Number of nodes with available pods: 1
Jul  4 08:54:07.484: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul  4 08:54:07.529: INFO: Number of nodes with available pods: 1
Jul  4 08:54:07.529: INFO: Number of running nodes: 0, number of available pods: 1
Jul  4 08:54:08.533: INFO: Number of nodes with available pods: 0
Jul  4 08:54:08.533: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul  4 08:54:08.838: INFO: Number of nodes with available pods: 0
Jul  4 08:54:08.838: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:09.908: INFO: Number of nodes with available pods: 0
Jul  4 08:54:09.908: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:10.842: INFO: Number of nodes with available pods: 0
Jul  4 08:54:10.842: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:11.842: INFO: Number of nodes with available pods: 0
Jul  4 08:54:11.842: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:12.841: INFO: Number of nodes with available pods: 0
Jul  4 08:54:12.841: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:13.842: INFO: Number of nodes with available pods: 0
Jul  4 08:54:13.842: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:14.854: INFO: Number of nodes with available pods: 0
Jul  4 08:54:14.854: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:15.840: INFO: Number of nodes with available pods: 0
Jul  4 08:54:15.840: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:17.011: INFO: Number of nodes with available pods: 0
Jul  4 08:54:17.011: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:17.841: INFO: Number of nodes with available pods: 0
Jul  4 08:54:17.841: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:18.841: INFO: Number of nodes with available pods: 0
Jul  4 08:54:18.841: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 08:54:19.840: INFO: Number of nodes with available pods: 1
Jul  4 08:54:19.840: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4572, will wait for the garbage collector to delete the pods
Jul  4 08:54:19.901: INFO: Deleting DaemonSet.extensions daemon-set took: 5.820504ms
Jul  4 08:54:20.001: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.178713ms
Jul  4 08:54:36.303: INFO: Number of nodes with available pods: 0
Jul  4 08:54:36.303: INFO: Number of running nodes: 0, number of available pods: 0
Jul  4 08:54:36.306: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4572/daemonsets","resourceVersion":"14114"},"items":null}

Jul  4 08:54:36.346: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4572/pods","resourceVersion":"14114"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:54:36.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4572" for this suite.

• [SLOW TEST:34.086 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":105,"skipped":1777,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:54:36.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  4 08:55:03.672: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:55:03.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6746" for this suite.

• [SLOW TEST:27.365 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1831,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:55:03.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul  4 08:55:03.838: INFO: Waiting up to 5m0s for pod "downward-api-699ad073-5698-42a4-bcc9-3b498bf56bde" in namespace "downward-api-6049" to be "success or failure"
Jul  4 08:55:03.843: INFO: Pod "downward-api-699ad073-5698-42a4-bcc9-3b498bf56bde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259868ms
Jul  4 08:55:05.856: INFO: Pod "downward-api-699ad073-5698-42a4-bcc9-3b498bf56bde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017484644s
Jul  4 08:55:07.859: INFO: Pod "downward-api-699ad073-5698-42a4-bcc9-3b498bf56bde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020315313s
Jul  4 08:55:10.252: INFO: Pod "downward-api-699ad073-5698-42a4-bcc9-3b498bf56bde": Phase="Running", Reason="", readiness=true. Elapsed: 6.413081976s
Jul  4 08:55:12.256: INFO: Pod "downward-api-699ad073-5698-42a4-bcc9-3b498bf56bde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.4179785s
STEP: Saw pod success
Jul  4 08:55:12.257: INFO: Pod "downward-api-699ad073-5698-42a4-bcc9-3b498bf56bde" satisfied condition "success or failure"
Jul  4 08:55:12.260: INFO: Trying to get logs from node jerma-worker2 pod downward-api-699ad073-5698-42a4-bcc9-3b498bf56bde container dapi-container: 
STEP: delete the pod
Jul  4 08:55:12.881: INFO: Waiting for pod downward-api-699ad073-5698-42a4-bcc9-3b498bf56bde to disappear
Jul  4 08:55:13.248: INFO: Pod downward-api-699ad073-5698-42a4-bcc9-3b498bf56bde no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:55:13.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6049" for this suite.

• [SLOW TEST:10.027 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1840,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:55:13.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jul  4 08:55:15.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7264'
Jul  4 08:55:15.677: INFO: stderr: ""
Jul  4 08:55:15.677: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul  4 08:55:16.680: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:16.681: INFO: Found 0 / 1
Jul  4 08:55:17.680: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:17.680: INFO: Found 0 / 1
Jul  4 08:55:19.399: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:19.399: INFO: Found 0 / 1
Jul  4 08:55:20.706: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:20.706: INFO: Found 0 / 1
Jul  4 08:55:21.758: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:21.758: INFO: Found 0 / 1
Jul  4 08:55:22.833: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:22.833: INFO: Found 0 / 1
Jul  4 08:55:24.476: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:24.476: INFO: Found 0 / 1
Jul  4 08:55:24.748: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:24.748: INFO: Found 0 / 1
Jul  4 08:55:25.682: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:25.682: INFO: Found 0 / 1
Jul  4 08:55:28.090: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:28.090: INFO: Found 0 / 1
Jul  4 08:55:28.894: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:28.894: INFO: Found 0 / 1
Jul  4 08:55:29.681: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:29.681: INFO: Found 0 / 1
Jul  4 08:55:30.700: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:30.700: INFO: Found 0 / 1
Jul  4 08:55:31.681: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:31.681: INFO: Found 0 / 1
Jul  4 08:55:32.890: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:32.890: INFO: Found 0 / 1
Jul  4 08:55:33.680: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:33.680: INFO: Found 1 / 1
Jul  4 08:55:33.680: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul  4 08:55:33.682: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:33.682: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  4 08:55:33.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-znflz --namespace=kubectl-7264 -p {"metadata":{"annotations":{"x":"y"}}}'
Jul  4 08:55:33.779: INFO: stderr: ""
Jul  4 08:55:33.779: INFO: stdout: "pod/agnhost-master-znflz patched\n"
STEP: checking annotations
Jul  4 08:55:33.781: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  4 08:55:33.781: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:55:33.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7264" for this suite.

• [SLOW TEST:20.011 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":108,"skipped":1850,"failed":0}
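The `-p {"metadata":{"annotations":{"x":"y"}}}` argument in the test above is a strategic merge patch. For reference, the same patch expressed as a standalone YAML document would look like the sketch below; only the annotation key/value `x: y` comes from the log, everything else (file name, flag availability on a given kubectl version) is an assumption.

```yaml
# patch.yaml -- strategic-merge patch equivalent to the -p argument above.
# Newer kubectl versions accept it via: kubectl patch pod <pod-name> --patch-file patch.yaml
metadata:
  annotations:
    x: "y"
```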
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:55:33.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-37a3f587-8ba9-46b2-bc82-245d36cdf1fc
STEP: Creating a pod to test consume secrets
Jul  4 08:55:33.887: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f0c57326-4d6b-4e02-9d6d-f5512dc3faaa" in namespace "projected-5121" to be "success or failure"
Jul  4 08:55:33.895: INFO: Pod "pod-projected-secrets-f0c57326-4d6b-4e02-9d6d-f5512dc3faaa": Phase="Pending", Reason="", readiness=false. Elapsed: 7.877627ms
Jul  4 08:55:35.898: INFO: Pod "pod-projected-secrets-f0c57326-4d6b-4e02-9d6d-f5512dc3faaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011313372s
Jul  4 08:55:37.901: INFO: Pod "pod-projected-secrets-f0c57326-4d6b-4e02-9d6d-f5512dc3faaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014007662s
STEP: Saw pod success
Jul  4 08:55:37.901: INFO: Pod "pod-projected-secrets-f0c57326-4d6b-4e02-9d6d-f5512dc3faaa" satisfied condition "success or failure"
Jul  4 08:55:37.903: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-f0c57326-4d6b-4e02-9d6d-f5512dc3faaa container projected-secret-volume-test: 
STEP: delete the pod
Jul  4 08:55:37.993: INFO: Waiting for pod pod-projected-secrets-f0c57326-4d6b-4e02-9d6d-f5512dc3faaa to disappear
Jul  4 08:55:37.995: INFO: Pod pod-projected-secrets-f0c57326-4d6b-4e02-9d6d-f5512dc3faaa no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:55:37.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5121" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1910,"failed":0}
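The projected-secret test above mounts a Secret through a `projected` volume with `defaultMode` set and checks the resulting file permissions. A minimal sketch of such a pod follows; the secret name matches the log, but the image, command, mount path, key, and mode are illustrative assumptions, not the framework's generated manifest.

```yaml
# Illustrative sketch: consume a Secret via a projected volume with defaultMode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox                       # assumption; the test uses its own image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                  # assumed mode; this is what the test verifies
      sources:
      - secret:
          name: projected-secret-test-37a3f587-8ba9-46b2-bc82-245d36cdf1fc
```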
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:55:38.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jul  4 08:55:38.112: INFO: Pod name pod-release: Found 0 pods out of 1
Jul  4 08:55:43.150: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:55:43.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9863" for this suite.

• [SLOW TEST:5.922 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":110,"skipped":1915,"failed":0}
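"Releasing" in the test above means that when a pod's labels stop matching the ReplicationController's selector, the controller removes its ownerReference and the pod keeps running unmanaged. A sketch of such an RC follows; the `pod-release` name is taken from the log's pod-name prefix, while the label key, image, and command are assumptions.

```yaml
# Illustrative sketch: an RC whose pod can be "released" by relabeling it
# (e.g. changing name: pod-release to name: released on the running pod)
# so it no longer matches spec.selector.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: busybox              # assumption
        command: ["sleep", "3600"]
```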
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:55:43.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Jul  4 08:55:44.318: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jul  4 08:55:44.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7660'
Jul  4 08:55:44.852: INFO: stderr: ""
Jul  4 08:55:44.852: INFO: stdout: "service/agnhost-slave created\n"
Jul  4 08:55:44.852: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jul  4 08:55:44.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7660'
Jul  4 08:55:45.847: INFO: stderr: ""
Jul  4 08:55:45.847: INFO: stdout: "service/agnhost-master created\n"
Jul  4 08:55:45.847: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul  4 08:55:45.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7660'
Jul  4 08:55:46.489: INFO: stderr: ""
Jul  4 08:55:46.489: INFO: stdout: "service/frontend created\n"
Jul  4 08:55:46.489: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jul  4 08:55:46.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7660'
Jul  4 08:55:47.074: INFO: stderr: ""
Jul  4 08:55:47.074: INFO: stdout: "deployment.apps/frontend created\n"
Jul  4 08:55:47.074: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul  4 08:55:47.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7660'
Jul  4 08:55:47.443: INFO: stderr: ""
Jul  4 08:55:47.443: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jul  4 08:55:47.444: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul  4 08:55:47.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7660'
Jul  4 08:55:47.768: INFO: stderr: ""
Jul  4 08:55:47.768: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jul  4 08:55:47.768: INFO: Waiting for all frontend pods to be Running.
Jul  4 08:56:07.819: INFO: Waiting for frontend to serve content.
Jul  4 08:56:08.465: INFO: Trying to add a new entry to the guestbook.
Jul  4 08:56:08.520: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul  4 08:56:08.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7660'
Jul  4 08:56:08.773: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  4 08:56:08.773: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul  4 08:56:08.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7660'
Jul  4 08:56:08.926: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  4 08:56:08.926: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul  4 08:56:08.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7660'
Jul  4 08:56:09.105: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  4 08:56:09.105: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul  4 08:56:09.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7660'
Jul  4 08:56:09.311: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  4 08:56:09.311: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul  4 08:56:09.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7660'
Jul  4 08:56:09.520: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  4 08:56:09.520: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul  4 08:56:09.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7660'
Jul  4 08:56:10.588: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  4 08:56:10.588: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:56:10.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7660" for this suite.

• [SLOW TEST:27.261 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":111,"skipped":1951,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:56:11.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-eeb74455-4a4a-40e5-82b1-caf8ac748bc7
STEP: Creating a pod to test consume secrets
Jul  4 08:56:16.181: INFO: Waiting up to 5m0s for pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f" in namespace "secrets-5659" to be "success or failure"
Jul  4 08:56:16.659: INFO: Pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 477.840048ms
Jul  4 08:56:20.104: INFO: Pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.922860097s
Jul  4 08:56:22.706: INFO: Pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.525242095s
Jul  4 08:56:25.396: INFO: Pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.21475116s
Jul  4 08:56:28.695: INFO: Pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.513865475s
Jul  4 08:56:31.430: INFO: Pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.249164696s
Jul  4 08:56:33.979: INFO: Pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.797775346s
Jul  4 08:56:36.095: INFO: Pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.913904565s
Jul  4 08:56:38.191: INFO: Pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.010310191s
Jul  4 08:56:40.196: INFO: Pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.01445531s
STEP: Saw pod success
Jul  4 08:56:40.196: INFO: Pod "pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f" satisfied condition "success or failure"
Jul  4 08:56:40.199: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f container secret-volume-test: 
STEP: delete the pod
Jul  4 08:56:40.757: INFO: Waiting for pod pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f to disappear
Jul  4 08:56:40.792: INFO: Pod pod-secrets-687b2837-1c5e-4c31-b020-bc85cb45aa4f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:56:40.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5659" for this suite.

• [SLOW TEST:29.704 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1959,"failed":0}
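The test above combines three knobs: a non-root `runAsUser`, a pod-level `fsGroup` (which sets group ownership on the mounted files), and a `defaultMode` on the secret volume. A minimal sketch, with the secret name taken from the log and all numeric IDs/modes being assumptions:

```yaml
# Illustrative sketch: secret volume consumed as non-root with
# defaultMode and fsGroup set. IDs and mode are assumed values.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000           # non-root user
    fsGroup: 1001             # volume files get this group ownership
  containers:
  - name: secret-volume-test
    image: busybox            # assumption
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-eeb74455-4a4a-40e5-82b1-caf8ac748bc7
      defaultMode: 0440
```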
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:56:40.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2 to expose endpoints map[]
Jul  4 08:56:41.059: INFO: Get endpoints failed (5.020577ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul  4 08:56:42.063: INFO: successfully validated that service multi-endpoint-test in namespace services-2 exposes endpoints map[] (1.008897994s elapsed)
STEP: Creating pod pod1 in namespace services-2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2 to expose endpoints map[pod1:[100]]
Jul  4 08:56:46.821: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.751802052s elapsed, will retry)
Jul  4 08:56:48.832: INFO: successfully validated that service multi-endpoint-test in namespace services-2 exposes endpoints map[pod1:[100]] (6.763197909s elapsed)
STEP: Creating pod pod2 in namespace services-2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2 to expose endpoints map[pod1:[100] pod2:[101]]
Jul  4 08:56:53.783: INFO: Unexpected endpoints: found map[3a9c030f-332f-4563-b982-6ba3dc5180c6:[100]], expected map[pod1:[100] pod2:[101]] (4.947448564s elapsed, will retry)
Jul  4 08:57:00.785: INFO: Unexpected endpoints: found map[3a9c030f-332f-4563-b982-6ba3dc5180c6:[100]], expected map[pod1:[100] pod2:[101]] (11.949333724s elapsed, will retry)
Jul  4 08:57:02.802: INFO: successfully validated that service multi-endpoint-test in namespace services-2 exposes endpoints map[pod1:[100] pod2:[101]] (13.965754876s elapsed)
STEP: Deleting pod pod1 in namespace services-2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2 to expose endpoints map[pod2:[101]]
Jul  4 08:57:03.839: INFO: successfully validated that service multi-endpoint-test in namespace services-2 exposes endpoints map[pod2:[101]] (1.033719664s elapsed)
STEP: Deleting pod pod2 in namespace services-2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2 to expose endpoints map[]
Jul  4 08:57:05.121: INFO: successfully validated that service multi-endpoint-test in namespace services-2 exposes endpoints map[] (1.276863716s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:57:05.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:25.087 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":113,"skipped":1962,"failed":0}
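The endpoints maps in the log (`map[pod1:[100] pod2:[101]]`) are keyed by pod, with the container ports each pod backs. A Service shaped like `multi-endpoint-test` could look like the sketch below; the service name and target ports 100/101 come from the log, while the port names, service ports, and selector are assumptions.

```yaml
# Illustrative sketch: a multiport Service whose two named ports map to
# different containerPorts (100 and 101, as seen in the endpoints maps above).
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test   # assumed selector
  ports:
  - name: portname1            # assumed name
    port: 80
    targetPort: 100
  - name: portname2            # assumed name
    port: 81
    targetPort: 101
```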
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:57:05.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4243
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  4 08:57:06.279: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  4 08:57:57.549: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.58 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4243 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 08:57:57.549: INFO: >>> kubeConfig: /root/.kube/config
I0704 08:57:57.581390       6 log.go:172] (0xc0015da840) (0xc001b01ea0) Create stream
I0704 08:57:57.581420       6 log.go:172] (0xc0015da840) (0xc001b01ea0) Stream added, broadcasting: 1
I0704 08:57:57.584273       6 log.go:172] (0xc0015da840) Reply frame received for 1
I0704 08:57:57.584309       6 log.go:172] (0xc0015da840) (0xc001a96000) Create stream
I0704 08:57:57.584331       6 log.go:172] (0xc0015da840) (0xc001a96000) Stream added, broadcasting: 3
I0704 08:57:57.586555       6 log.go:172] (0xc0015da840) Reply frame received for 3
I0704 08:57:57.586588       6 log.go:172] (0xc0015da840) (0xc001780000) Create stream
I0704 08:57:57.586601       6 log.go:172] (0xc0015da840) (0xc001780000) Stream added, broadcasting: 5
I0704 08:57:57.587196       6 log.go:172] (0xc0015da840) Reply frame received for 5
I0704 08:57:58.663387       6 log.go:172] (0xc0015da840) Data frame received for 5
I0704 08:57:58.663489       6 log.go:172] (0xc001780000) (5) Data frame handling
I0704 08:57:58.663517       6 log.go:172] (0xc0015da840) Data frame received for 3
I0704 08:57:58.663531       6 log.go:172] (0xc001a96000) (3) Data frame handling
I0704 08:57:58.663540       6 log.go:172] (0xc001a96000) (3) Data frame sent
I0704 08:57:58.663655       6 log.go:172] (0xc0015da840) Data frame received for 3
I0704 08:57:58.663668       6 log.go:172] (0xc001a96000) (3) Data frame handling
I0704 08:57:58.665215       6 log.go:172] (0xc0015da840) Data frame received for 1
I0704 08:57:58.665232       6 log.go:172] (0xc001b01ea0) (1) Data frame handling
I0704 08:57:58.665245       6 log.go:172] (0xc001b01ea0) (1) Data frame sent
I0704 08:57:58.665255       6 log.go:172] (0xc0015da840) (0xc001b01ea0) Stream removed, broadcasting: 1
I0704 08:57:58.665342       6 log.go:172] (0xc0015da840) (0xc001b01ea0) Stream removed, broadcasting: 1
I0704 08:57:58.665520       6 log.go:172] (0xc0015da840) (0xc001a96000) Stream removed, broadcasting: 3
I0704 08:57:58.665537       6 log.go:172] (0xc0015da840) (0xc001780000) Stream removed, broadcasting: 5
Jul  4 08:57:58.665: INFO: Found all expected endpoints: [netserver-0]
I0704 08:57:58.665588       6 log.go:172] (0xc0015da840) Go away received
Jul  4 08:57:58.708: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.71 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4243 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 08:57:58.708: INFO: >>> kubeConfig: /root/.kube/config
I0704 08:57:58.735937       6 log.go:172] (0xc0025fe4d0) (0xc001a96640) Create stream
I0704 08:57:58.735955       6 log.go:172] (0xc0025fe4d0) (0xc001a96640) Stream added, broadcasting: 1
I0704 08:57:58.738097       6 log.go:172] (0xc0025fe4d0) Reply frame received for 1
I0704 08:57:58.738145       6 log.go:172] (0xc0025fe4d0) (0xc001a96780) Create stream
I0704 08:57:58.738160       6 log.go:172] (0xc0025fe4d0) (0xc001a96780) Stream added, broadcasting: 3
I0704 08:57:58.739231       6 log.go:172] (0xc0025fe4d0) Reply frame received for 3
I0704 08:57:58.739270       6 log.go:172] (0xc0025fe4d0) (0xc001a96820) Create stream
I0704 08:57:58.739281       6 log.go:172] (0xc0025fe4d0) (0xc001a96820) Stream added, broadcasting: 5
I0704 08:57:58.740244       6 log.go:172] (0xc0025fe4d0) Reply frame received for 5
I0704 08:57:59.795577       6 log.go:172] (0xc0025fe4d0) Data frame received for 5
I0704 08:57:59.795628       6 log.go:172] (0xc001a96820) (5) Data frame handling
I0704 08:57:59.795660       6 log.go:172] (0xc0025fe4d0) Data frame received for 3
I0704 08:57:59.795674       6 log.go:172] (0xc001a96780) (3) Data frame handling
I0704 08:57:59.795689       6 log.go:172] (0xc001a96780) (3) Data frame sent
I0704 08:57:59.795704       6 log.go:172] (0xc0025fe4d0) Data frame received for 3
I0704 08:57:59.795726       6 log.go:172] (0xc001a96780) (3) Data frame handling
I0704 08:57:59.796801       6 log.go:172] (0xc0025fe4d0) Data frame received for 1
I0704 08:57:59.796825       6 log.go:172] (0xc001a96640) (1) Data frame handling
I0704 08:57:59.796843       6 log.go:172] (0xc001a96640) (1) Data frame sent
I0704 08:57:59.796859       6 log.go:172] (0xc0025fe4d0) (0xc001a96640) Stream removed, broadcasting: 1
I0704 08:57:59.796873       6 log.go:172] (0xc0025fe4d0) Go away received
I0704 08:57:59.797032       6 log.go:172] (0xc0025fe4d0) (0xc001a96640) Stream removed, broadcasting: 1
I0704 08:57:59.797068       6 log.go:172] (0xc0025fe4d0) (0xc001a96780) Stream removed, broadcasting: 3
I0704 08:57:59.797093       6 log.go:172] (0xc0025fe4d0) (0xc001a96820) Stream removed, broadcasting: 5
Jul  4 08:57:59.797: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:57:59.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4243" for this suite.

• [SLOW TEST:53.823 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1974,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:57:59.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0704 08:58:48.544166       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  4 08:58:48.544: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:58:48.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5330" for this suite.

• [SLOW TEST:48.746 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":115,"skipped":1988,"failed":0}
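The garbage-collector test above deletes a replication controller with delete options that say "orphan" and then verifies the pods survive. A toy model of that behavior (an assumption-laden sketch, not the real Kubernetes API or garbage collector — object dicts and the `delete_owner` helper are invented for illustration) looks like this:

```python
# Toy model: objects carry ownerReferences; deleting the owner removes
# dependents only under a cascading propagation policy. With "Orphan",
# dependents keep running and their ownerReference to the deleted owner
# is cleared instead — which is what the test asserts after 30 seconds.
def delete_owner(pods, owner_uid, propagation_policy):
    """Return the pods that remain after the owner is deleted."""
    if propagation_policy == "Orphan":
        for pod in pods:
            pod["ownerReferences"] = [
                r for r in pod["ownerReferences"] if r != owner_uid
            ]
        return pods
    # "Background"/"Foreground": dependents of the deleted owner go away.
    return [p for p in pods if owner_uid not in p["ownerReferences"]]

pods = [{"name": f"pod-{i}", "ownerReferences": ["rc-uid"]} for i in range(3)]
survivors = delete_owner(pods, "rc-uid", "Orphan")
print(len(survivors), survivors[0]["ownerReferences"])
# -> 3 []
```

With `"Background"` instead of `"Orphan"`, the same call returns an empty list — the contrast the conformance test is probing.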
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:58:48.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:58:57.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5100" for this suite.

• [SLOW TEST:9.345 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1995,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:58:57.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 08:58:58.916: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f" in namespace "projected-814" to be "success or failure"
Jul  4 08:58:58.927: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.778785ms
Jul  4 08:59:01.962: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.046087937s
Jul  4 08:59:04.638: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.721570911s
Jul  4 08:59:07.112: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196050168s
Jul  4 08:59:09.336: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.419888194s
Jul  4 08:59:11.340: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.423735273s
Jul  4 08:59:13.343: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f": Phase="Running", Reason="", readiness=true. Elapsed: 14.426614559s
Jul  4 08:59:15.347: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f": Phase="Running", Reason="", readiness=true. Elapsed: 16.430820792s
Jul  4 08:59:17.368: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f": Phase="Running", Reason="", readiness=true. Elapsed: 18.452109313s
Jul  4 08:59:22.806: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f": Phase="Running", Reason="", readiness=true. Elapsed: 23.890301588s
Jul  4 08:59:24.809: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.89349155s
STEP: Saw pod success
Jul  4 08:59:24.810: INFO: Pod "downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f" satisfied condition "success or failure"
Jul  4 08:59:24.812: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f container client-container: 
STEP: delete the pod
Jul  4 08:59:25.351: INFO: Waiting for pod downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f to disappear
Jul  4 08:59:25.871: INFO: Pod downwardapi-volume-49a668d5-4d77-4f89-a87e-d18eceee011f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:59:25.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-814" for this suite.

• [SLOW TEST:28.042 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1996,"failed":0}
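The `Waiting up to 5m0s for pod … to be "success or failure"` lines above come from a poll loop: the framework repeatedly fetches the pod phase until it reaches a terminal state or the timeout expires. A minimal self-contained sketch of that loop (assumption: `get_phase` is a stand-in for the API-server query the real framework performs; the function name is invented):

```python
import time

def wait_for_pod_terminal(get_phase, timeout_s=300.0, poll_s=2.0,
                          now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod is Succeeded or Failed, or time out.

    now/sleep are injected so the sketch can be exercised without real delays.
    """
    phase = get_phase()
    deadline = now() + timeout_s
    while phase not in ("Succeeded", "Failed"):
        if now() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")
        sleep(poll_s)
        phase = get_phase()
    return phase

# A pod that is Pending twice, Running twice, then Succeeded —
# the same progression the log above records.
phases = iter(["Pending", "Pending", "Running", "Running", "Succeeded"])
print(wait_for_pod_terminal(lambda: next(phases), sleep=lambda _: None))
# -> Succeeded
```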
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:59:25.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5405.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5405.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5405.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5405.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5405.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5405.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  4 08:59:33.510: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-5405/dns-test-61c8e50b-56c5-443e-af73-5365290a8d3d: the server could not find the requested resource (get pods dns-test-61c8e50b-56c5-443e-af73-5365290a8d3d)
Jul  4 08:59:33.513: INFO: Unable to read jessie_udp@PodARecord from pod dns-5405/dns-test-61c8e50b-56c5-443e-af73-5365290a8d3d: the server could not find the requested resource (get pods dns-test-61c8e50b-56c5-443e-af73-5365290a8d3d)
Jul  4 08:59:33.515: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5405/dns-test-61c8e50b-56c5-443e-af73-5365290a8d3d: the server could not find the requested resource (get pods dns-test-61c8e50b-56c5-443e-af73-5365290a8d3d)
Jul  4 08:59:33.515: INFO: Lookups using dns-5405/dns-test-61c8e50b-56c5-443e-af73-5365290a8d3d failed for: [jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  4 08:59:38.532: INFO: DNS probes using dns-5405/dns-test-61c8e50b-56c5-443e-af73-5365290a8d3d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:59:38.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5405" for this suite.

• [SLOW TEST:13.083 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":118,"skipped":2016,"failed":0}
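The probe commands above derive each pod's A-record name from its IP with `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5405.pod.cluster.local"}'`: dots in the IPv4 address become dashes, followed by the namespace and the `pod.<cluster-domain>` suffix. The same construction as a sketch (the example IP is hypothetical; the namespace comes from the test above):

```python
def pod_a_record(pod_ip: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the pod A-record name the awk one-liner in the probe produces."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(pod_a_record("10.244.1.5", "dns-5405"))
# -> 10-244-1-5.dns-5405.pod.cluster.local
```

The probe then resolves that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and writes an `OK` marker file per successful lookup, which the test pod collects.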
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:59:39.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:59:39.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 08:59:43.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9116" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2016,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 08:59:43.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 08:59:43.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7880
I0704 08:59:43.229637       6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7880, replica count: 1
I0704 08:59:44.279990       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 08:59:45.280270       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 08:59:46.280499       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 08:59:47.280717       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 08:59:48.280902       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  4 08:59:48.407: INFO: Created: latency-svc-8n9vt
Jul  4 08:59:48.419: INFO: Got endpoints: latency-svc-8n9vt [38.168129ms]
Jul  4 08:59:48.469: INFO: Created: latency-svc-jrcbp
Jul  4 08:59:48.477: INFO: Got endpoints: latency-svc-jrcbp [58.570424ms]
Jul  4 08:59:48.500: INFO: Created: latency-svc-z2ltf
Jul  4 08:59:48.511: INFO: Got endpoints: latency-svc-z2ltf [92.307562ms]
Jul  4 08:59:48.533: INFO: Created: latency-svc-cggrt
Jul  4 08:59:48.602: INFO: Got endpoints: latency-svc-cggrt [183.001092ms]
Jul  4 08:59:48.631: INFO: Created: latency-svc-rsmd9
Jul  4 08:59:48.647: INFO: Got endpoints: latency-svc-rsmd9 [228.511474ms]
Jul  4 08:59:48.674: INFO: Created: latency-svc-w8cqx
Jul  4 08:59:48.689: INFO: Got endpoints: latency-svc-w8cqx [269.994034ms]
Jul  4 08:59:48.770: INFO: Created: latency-svc-lm4nw
Jul  4 08:59:48.773: INFO: Got endpoints: latency-svc-lm4nw [354.307859ms]
Jul  4 08:59:48.802: INFO: Created: latency-svc-g85pb
Jul  4 08:59:48.810: INFO: Got endpoints: latency-svc-g85pb [390.444535ms]
Jul  4 08:59:48.857: INFO: Created: latency-svc-nw9xc
Jul  4 08:59:48.920: INFO: Got endpoints: latency-svc-nw9xc [500.43873ms]
Jul  4 08:59:48.921: INFO: Created: latency-svc-qqn8s
Jul  4 08:59:48.941: INFO: Got endpoints: latency-svc-qqn8s [521.954658ms]
Jul  4 08:59:48.962: INFO: Created: latency-svc-btfkd
Jul  4 08:59:48.977: INFO: Got endpoints: latency-svc-btfkd [557.899246ms]
Jul  4 08:59:48.998: INFO: Created: latency-svc-czhvw
Jul  4 08:59:49.014: INFO: Got endpoints: latency-svc-czhvw [594.855406ms]
Jul  4 08:59:49.069: INFO: Created: latency-svc-xx6rr
Jul  4 08:59:49.073: INFO: Got endpoints: latency-svc-xx6rr [654.30477ms]
Jul  4 08:59:49.099: INFO: Created: latency-svc-d9gdw
Jul  4 08:59:49.116: INFO: Got endpoints: latency-svc-d9gdw [696.584084ms]
Jul  4 08:59:49.150: INFO: Created: latency-svc-zlnxl
Jul  4 08:59:49.164: INFO: Got endpoints: latency-svc-zlnxl [744.618073ms]
Jul  4 08:59:49.219: INFO: Created: latency-svc-6vl67
Jul  4 08:59:49.221: INFO: Got endpoints: latency-svc-6vl67 [802.291159ms]
Jul  4 08:59:49.252: INFO: Created: latency-svc-2clg5
Jul  4 08:59:49.288: INFO: Got endpoints: latency-svc-2clg5 [810.140779ms]
Jul  4 08:59:49.318: INFO: Created: latency-svc-qrvbg
Jul  4 08:59:49.392: INFO: Got endpoints: latency-svc-qrvbg [881.046597ms]
Jul  4 08:59:49.395: INFO: Created: latency-svc-q87r4
Jul  4 08:59:49.406: INFO: Got endpoints: latency-svc-q87r4 [803.684545ms]
Jul  4 08:59:49.444: INFO: Created: latency-svc-wwz74
Jul  4 08:59:49.460: INFO: Got endpoints: latency-svc-wwz74 [812.46668ms]
Jul  4 08:59:49.479: INFO: Created: latency-svc-kfnn2
Jul  4 08:59:49.560: INFO: Got endpoints: latency-svc-kfnn2 [870.638561ms]
Jul  4 08:59:49.562: INFO: Created: latency-svc-6p6rz
Jul  4 08:59:49.567: INFO: Got endpoints: latency-svc-6p6rz [793.658398ms]
Jul  4 08:59:49.604: INFO: Created: latency-svc-qspl4
Jul  4 08:59:49.628: INFO: Got endpoints: latency-svc-qspl4 [817.984609ms]
Jul  4 08:59:49.650: INFO: Created: latency-svc-66zbw
Jul  4 08:59:49.704: INFO: Got endpoints: latency-svc-66zbw [784.193226ms]
Jul  4 08:59:49.730: INFO: Created: latency-svc-74bsl
Jul  4 08:59:49.742: INFO: Got endpoints: latency-svc-74bsl [801.190935ms]
Jul  4 08:59:49.762: INFO: Created: latency-svc-2gvsd
Jul  4 08:59:49.778: INFO: Got endpoints: latency-svc-2gvsd [800.996993ms]
Jul  4 08:59:49.801: INFO: Created: latency-svc-584zm
Jul  4 08:59:49.865: INFO: Got endpoints: latency-svc-584zm [851.61926ms]
Jul  4 08:59:49.868: INFO: Created: latency-svc-cvbnn
Jul  4 08:59:49.874: INFO: Got endpoints: latency-svc-cvbnn [800.814618ms]
Jul  4 08:59:49.902: INFO: Created: latency-svc-2vp6b
Jul  4 08:59:49.917: INFO: Got endpoints: latency-svc-2vp6b [801.419063ms]
Jul  4 08:59:49.944: INFO: Created: latency-svc-tzmll
Jul  4 08:59:49.965: INFO: Got endpoints: latency-svc-tzmll [801.548918ms]
Jul  4 08:59:50.028: INFO: Created: latency-svc-dp7qj
Jul  4 08:59:50.037: INFO: Got endpoints: latency-svc-dp7qj [815.865965ms]
Jul  4 08:59:50.100: INFO: Created: latency-svc-krsts
Jul  4 08:59:50.115: INFO: Got endpoints: latency-svc-krsts [827.592691ms]
Jul  4 08:59:50.214: INFO: Created: latency-svc-x48fp
Jul  4 08:59:50.221: INFO: Got endpoints: latency-svc-x48fp [828.570029ms]
Jul  4 08:59:50.251: INFO: Created: latency-svc-wb28w
Jul  4 08:59:50.266: INFO: Got endpoints: latency-svc-wb28w [860.180068ms]
Jul  4 08:59:50.295: INFO: Created: latency-svc-fhzjh
Jul  4 08:59:50.368: INFO: Got endpoints: latency-svc-fhzjh [908.366669ms]
Jul  4 08:59:50.371: INFO: Created: latency-svc-v524l
Jul  4 08:59:50.374: INFO: Got endpoints: latency-svc-v524l [814.252293ms]
Jul  4 08:59:50.402: INFO: Created: latency-svc-mgntz
Jul  4 08:59:50.416: INFO: Got endpoints: latency-svc-mgntz [849.237343ms]
Jul  4 08:59:50.450: INFO: Created: latency-svc-tdknc
Jul  4 08:59:50.464: INFO: Got endpoints: latency-svc-tdknc [836.638026ms]
Jul  4 08:59:50.512: INFO: Created: latency-svc-xmccs
Jul  4 08:59:50.519: INFO: Got endpoints: latency-svc-xmccs [815.018871ms]
Jul  4 08:59:50.542: INFO: Created: latency-svc-wtgm6
Jul  4 08:59:50.561: INFO: Got endpoints: latency-svc-wtgm6 [818.902879ms]
Jul  4 08:59:50.595: INFO: Created: latency-svc-9hht7
Jul  4 08:59:50.609: INFO: Got endpoints: latency-svc-9hht7 [831.310991ms]
Jul  4 08:59:50.650: INFO: Created: latency-svc-5tkk8
Jul  4 08:59:50.657: INFO: Got endpoints: latency-svc-5tkk8 [791.90646ms]
Jul  4 08:59:50.680: INFO: Created: latency-svc-wcss2
Jul  4 08:59:50.687: INFO: Got endpoints: latency-svc-wcss2 [813.199667ms]
Jul  4 08:59:50.712: INFO: Created: latency-svc-vs26z
Jul  4 08:59:50.740: INFO: Got endpoints: latency-svc-vs26z [822.463806ms]
Jul  4 08:59:50.800: INFO: Created: latency-svc-8qlsw
Jul  4 08:59:50.824: INFO: Got endpoints: latency-svc-8qlsw [858.843473ms]
Jul  4 08:59:50.861: INFO: Created: latency-svc-v7rvz
Jul  4 08:59:50.887: INFO: Got endpoints: latency-svc-v7rvz [849.931748ms]
Jul  4 08:59:50.967: INFO: Created: latency-svc-4rk55
Jul  4 08:59:50.971: INFO: Got endpoints: latency-svc-4rk55 [855.436826ms]
Jul  4 08:59:51.001: INFO: Created: latency-svc-8t9xz
Jul  4 08:59:51.019: INFO: Got endpoints: latency-svc-8t9xz [798.045607ms]
Jul  4 08:59:51.039: INFO: Created: latency-svc-7lwjd
Jul  4 08:59:51.055: INFO: Got endpoints: latency-svc-7lwjd [789.070928ms]
Jul  4 08:59:51.154: INFO: Created: latency-svc-p4prd
Jul  4 08:59:51.158: INFO: Got endpoints: latency-svc-p4prd [789.190048ms]
Jul  4 08:59:51.188: INFO: Created: latency-svc-x7jvs
Jul  4 08:59:51.206: INFO: Got endpoints: latency-svc-x7jvs [831.519147ms]
Jul  4 08:59:51.230: INFO: Created: latency-svc-t2nnq
Jul  4 08:59:51.247: INFO: Got endpoints: latency-svc-t2nnq [831.080737ms]
Jul  4 08:59:51.309: INFO: Created: latency-svc-78q9n
Jul  4 08:59:51.314: INFO: Got endpoints: latency-svc-78q9n [849.831995ms]
Jul  4 08:59:51.352: INFO: Created: latency-svc-8vsqs
Jul  4 08:59:51.374: INFO: Got endpoints: latency-svc-8vsqs [855.184672ms]
Jul  4 08:59:51.478: INFO: Created: latency-svc-qxm6w
Jul  4 08:59:51.484: INFO: Got endpoints: latency-svc-qxm6w [923.014408ms]
Jul  4 08:59:51.516: INFO: Created: latency-svc-v6j46
Jul  4 08:59:51.540: INFO: Got endpoints: latency-svc-v6j46 [930.384299ms]
Jul  4 08:59:51.569: INFO: Created: latency-svc-vkkt8
Jul  4 08:59:51.620: INFO: Got endpoints: latency-svc-vkkt8 [962.285595ms]
Jul  4 08:59:51.647: INFO: Created: latency-svc-qw8pj
Jul  4 08:59:51.683: INFO: Got endpoints: latency-svc-qw8pj [995.827955ms]
Jul  4 08:59:51.782: INFO: Created: latency-svc-65lqt
Jul  4 08:59:51.785: INFO: Got endpoints: latency-svc-65lqt [1.045180574s]
Jul  4 08:59:51.837: INFO: Created: latency-svc-zdb2h
Jul  4 08:59:51.855: INFO: Got endpoints: latency-svc-zdb2h [1.030646814s]
Jul  4 08:59:51.879: INFO: Created: latency-svc-wf727
Jul  4 08:59:51.937: INFO: Got endpoints: latency-svc-wf727 [1.050213366s]
Jul  4 08:59:51.940: INFO: Created: latency-svc-fbdvk
Jul  4 08:59:51.945: INFO: Got endpoints: latency-svc-fbdvk [974.333961ms]
Jul  4 08:59:51.967: INFO: Created: latency-svc-npncs
Jul  4 08:59:51.982: INFO: Got endpoints: latency-svc-npncs [962.419198ms]
Jul  4 08:59:52.004: INFO: Created: latency-svc-vnt64
Jul  4 08:59:52.012: INFO: Got endpoints: latency-svc-vnt64 [956.651742ms]
Jul  4 08:59:52.035: INFO: Created: latency-svc-t5sxs
Jul  4 08:59:52.087: INFO: Got endpoints: latency-svc-t5sxs [929.153703ms]
Jul  4 08:59:52.089: INFO: Created: latency-svc-95kw9
Jul  4 08:59:52.097: INFO: Got endpoints: latency-svc-95kw9 [891.445717ms]
Jul  4 08:59:52.130: INFO: Created: latency-svc-7wpqw
Jul  4 08:59:52.138: INFO: Got endpoints: latency-svc-7wpqw [890.943639ms]
Jul  4 08:59:52.174: INFO: Created: latency-svc-r9hxt
Jul  4 08:59:52.181: INFO: Got endpoints: latency-svc-r9hxt [866.34276ms]
Jul  4 08:59:52.231: INFO: Created: latency-svc-7hwkm
Jul  4 08:59:52.235: INFO: Got endpoints: latency-svc-7hwkm [860.600174ms]
Jul  4 08:59:52.256: INFO: Created: latency-svc-cnzrr
Jul  4 08:59:52.265: INFO: Got endpoints: latency-svc-cnzrr [781.191985ms]
Jul  4 08:59:52.286: INFO: Created: latency-svc-ktpzw
Jul  4 08:59:52.296: INFO: Got endpoints: latency-svc-ktpzw [755.781206ms]
Jul  4 08:59:52.317: INFO: Created: latency-svc-7zqr2
Jul  4 08:59:52.327: INFO: Got endpoints: latency-svc-7zqr2 [706.830853ms]
Jul  4 08:59:52.368: INFO: Created: latency-svc-hzt67
Jul  4 08:59:52.374: INFO: Got endpoints: latency-svc-hzt67 [690.655497ms]
Jul  4 08:59:52.408: INFO: Created: latency-svc-ngks8
Jul  4 08:59:52.416: INFO: Got endpoints: latency-svc-ngks8 [631.548103ms]
Jul  4 08:59:52.561: INFO: Created: latency-svc-zklr7
Jul  4 08:59:52.564: INFO: Got endpoints: latency-svc-zklr7 [708.929626ms]
Jul  4 08:59:52.601: INFO: Created: latency-svc-spzq9
Jul  4 08:59:52.621: INFO: Got endpoints: latency-svc-spzq9 [683.392878ms]
Jul  4 08:59:52.644: INFO: Created: latency-svc-wnndx
Jul  4 08:59:52.716: INFO: Got endpoints: latency-svc-wnndx [770.874498ms]
Jul  4 08:59:52.721: INFO: Created: latency-svc-66g7f
Jul  4 08:59:52.754: INFO: Got endpoints: latency-svc-66g7f [772.679302ms]
Jul  4 08:59:52.754: INFO: Created: latency-svc-7j4ds
Jul  4 08:59:52.777: INFO: Got endpoints: latency-svc-7j4ds [765.608954ms]
Jul  4 08:59:52.878: INFO: Created: latency-svc-gg5ft
Jul  4 08:59:52.885: INFO: Got endpoints: latency-svc-gg5ft [798.182959ms]
Jul  4 08:59:52.906: INFO: Created: latency-svc-2g29g
Jul  4 08:59:52.922: INFO: Got endpoints: latency-svc-2g29g [824.442835ms]
Jul  4 08:59:52.941: INFO: Created: latency-svc-7rw46
Jul  4 08:59:52.958: INFO: Got endpoints: latency-svc-7rw46 [819.155032ms]
Jul  4 08:59:53.046: INFO: Created: latency-svc-qscsj
Jul  4 08:59:53.049: INFO: Got endpoints: latency-svc-qscsj [868.077762ms]
Jul  4 08:59:53.074: INFO: Created: latency-svc-nzzs8
Jul  4 08:59:53.102: INFO: Got endpoints: latency-svc-nzzs8 [867.544176ms]
Jul  4 08:59:53.135: INFO: Created: latency-svc-mln5j
Jul  4 08:59:53.219: INFO: Got endpoints: latency-svc-mln5j [953.639722ms]
Jul  4 08:59:53.220: INFO: Created: latency-svc-s5gmx
Jul  4 08:59:53.228: INFO: Got endpoints: latency-svc-s5gmx [932.844385ms]
Jul  4 08:59:53.264: INFO: Created: latency-svc-5zs8n
Jul  4 08:59:53.271: INFO: Got endpoints: latency-svc-5zs8n [944.103358ms]
Jul  4 08:59:53.494: INFO: Created: latency-svc-bt9dr
Jul  4 08:59:53.503: INFO: Got endpoints: latency-svc-bt9dr [1.129031397s]
Jul  4 08:59:53.544: INFO: Created: latency-svc-8hsqc
Jul  4 08:59:53.580: INFO: Got endpoints: latency-svc-8hsqc [1.163367438s]
Jul  4 08:59:53.652: INFO: Created: latency-svc-x49nb
Jul  4 08:59:53.662: INFO: Got endpoints: latency-svc-x49nb [1.097994007s]
Jul  4 08:59:53.701: INFO: Created: latency-svc-rzv6v
Jul  4 08:59:53.718: INFO: Got endpoints: latency-svc-rzv6v [1.097210158s]
Jul  4 08:59:53.818: INFO: Created: latency-svc-pt9c8
Jul  4 08:59:53.820: INFO: Got endpoints: latency-svc-pt9c8 [1.104243517s]
Jul  4 08:59:53.850: INFO: Created: latency-svc-2ckxq
Jul  4 08:59:53.866: INFO: Got endpoints: latency-svc-2ckxq [1.111466955s]
Jul  4 08:59:53.893: INFO: Created: latency-svc-dv2hl
Jul  4 08:59:53.908: INFO: Got endpoints: latency-svc-dv2hl [1.130479584s]
Jul  4 08:59:54.015: INFO: Created: latency-svc-4bmnm
Jul  4 08:59:54.018: INFO: Got endpoints: latency-svc-4bmnm [1.133258042s]
Jul  4 08:59:54.075: INFO: Created: latency-svc-9n8xj
Jul  4 08:59:54.094: INFO: Got endpoints: latency-svc-9n8xj [1.172742055s]
Jul  4 08:59:54.183: INFO: Created: latency-svc-9mgsz
Jul  4 08:59:54.190: INFO: Got endpoints: latency-svc-9mgsz [1.232283494s]
Jul  4 08:59:54.214: INFO: Created: latency-svc-pm5pb
Jul  4 08:59:54.232: INFO: Got endpoints: latency-svc-pm5pb [1.18339336s]
Jul  4 08:59:54.345: INFO: Created: latency-svc-c7v86
Jul  4 08:59:54.350: INFO: Got endpoints: latency-svc-c7v86 [1.247218322s]
Jul  4 08:59:54.377: INFO: Created: latency-svc-qjpx9
Jul  4 08:59:54.395: INFO: Got endpoints: latency-svc-qjpx9 [1.175893226s]
Jul  4 08:59:54.418: INFO: Created: latency-svc-kfp94
Jul  4 08:59:54.425: INFO: Got endpoints: latency-svc-kfp94 [1.196298705s]
Jul  4 08:59:54.512: INFO: Created: latency-svc-77f24
Jul  4 08:59:54.514: INFO: Got endpoints: latency-svc-77f24 [1.243237582s]
Jul  4 08:59:54.562: INFO: Created: latency-svc-x4xvq
Jul  4 08:59:54.611: INFO: Got endpoints: latency-svc-x4xvq [1.108320495s]
Jul  4 08:59:54.656: INFO: Created: latency-svc-p974k
Jul  4 08:59:54.661: INFO: Got endpoints: latency-svc-p974k [1.080945833s]
Jul  4 08:59:54.690: INFO: Created: latency-svc-4vdd2
Jul  4 08:59:54.703: INFO: Got endpoints: latency-svc-4vdd2 [1.040669023s]
Jul  4 08:59:54.731: INFO: Created: latency-svc-wsjnn
Jul  4 08:59:54.750: INFO: Got endpoints: latency-svc-wsjnn [1.032270777s]
Jul  4 08:59:54.824: INFO: Created: latency-svc-mp7jk
Jul  4 08:59:54.826: INFO: Got endpoints: latency-svc-mp7jk [1.005823057s]
Jul  4 08:59:54.865: INFO: Created: latency-svc-dhz7w
Jul  4 08:59:54.883: INFO: Got endpoints: latency-svc-dhz7w [1.016602487s]
Jul  4 08:59:54.906: INFO: Created: latency-svc-kkklw
Jul  4 08:59:54.979: INFO: Got endpoints: latency-svc-kkklw [1.071231517s]
Jul  4 08:59:55.003: INFO: Created: latency-svc-wm47w
Jul  4 08:59:55.021: INFO: Got endpoints: latency-svc-wm47w [1.002718491s]
Jul  4 08:59:55.044: INFO: Created: latency-svc-rfxsr
Jul  4 08:59:55.063: INFO: Got endpoints: latency-svc-rfxsr [968.9392ms]
Jul  4 08:59:55.159: INFO: Created: latency-svc-sjx2z
Jul  4 08:59:55.162: INFO: Got endpoints: latency-svc-sjx2z [972.32403ms]
Jul  4 08:59:55.221: INFO: Created: latency-svc-jvv88
Jul  4 08:59:55.238: INFO: Got endpoints: latency-svc-jvv88 [1.005298629s]
Jul  4 08:59:55.333: INFO: Created: latency-svc-hgqrd
Jul  4 08:59:55.336: INFO: Got endpoints: latency-svc-hgqrd [986.146648ms]
Jul  4 08:59:55.359: INFO: Created: latency-svc-hzgrx
Jul  4 08:59:55.394: INFO: Got endpoints: latency-svc-hzgrx [998.665082ms]
Jul  4 08:59:55.531: INFO: Created: latency-svc-xs787
Jul  4 08:59:55.534: INFO: Got endpoints: latency-svc-xs787 [1.109300751s]
Jul  4 08:59:55.564: INFO: Created: latency-svc-xvvhl
Jul  4 08:59:55.581: INFO: Got endpoints: latency-svc-xvvhl [1.067140001s]
Jul  4 08:59:55.617: INFO: Created: latency-svc-blfww
Jul  4 08:59:55.710: INFO: Got endpoints: latency-svc-blfww [1.098527034s]
Jul  4 08:59:55.744: INFO: Created: latency-svc-mh8vp
Jul  4 08:59:55.760: INFO: Got endpoints: latency-svc-mh8vp [1.099515979s]
Jul  4 08:59:55.795: INFO: Created: latency-svc-2dsvt
Jul  4 08:59:55.809: INFO: Got endpoints: latency-svc-2dsvt [1.106109587s]
Jul  4 08:59:55.854: INFO: Created: latency-svc-kfm5g
Jul  4 08:59:55.870: INFO: Got endpoints: latency-svc-kfm5g [1.119648909s]
Jul  4 08:59:55.902: INFO: Created: latency-svc-bv2cl
Jul  4 08:59:55.918: INFO: Got endpoints: latency-svc-bv2cl [1.091341954s]
Jul  4 08:59:55.944: INFO: Created: latency-svc-krssq
Jul  4 08:59:55.997: INFO: Got endpoints: latency-svc-krssq [1.114172736s]
Jul  4 08:59:56.026: INFO: Created: latency-svc-qkc4s
Jul  4 08:59:56.045: INFO: Got endpoints: latency-svc-qkc4s [1.065402484s]
Jul  4 08:59:56.190: INFO: Created: latency-svc-fx7l2
Jul  4 08:59:56.218: INFO: Got endpoints: latency-svc-fx7l2 [1.197055989s]
Jul  4 08:59:56.238: INFO: Created: latency-svc-g8hx5
Jul  4 08:59:56.254: INFO: Got endpoints: latency-svc-g8hx5 [1.191071497s]
Jul  4 08:59:56.279: INFO: Created: latency-svc-284qf
Jul  4 08:59:56.356: INFO: Got endpoints: latency-svc-284qf [1.193866251s]
Jul  4 08:59:56.375: INFO: Created: latency-svc-bh4b4
Jul  4 08:59:56.392: INFO: Got endpoints: latency-svc-bh4b4 [1.154495722s]
Jul  4 08:59:56.419: INFO: Created: latency-svc-cv8hz
Jul  4 08:59:56.435: INFO: Got endpoints: latency-svc-cv8hz [1.098918447s]
Jul  4 08:59:56.590: INFO: Created: latency-svc-dff2z
Jul  4 08:59:56.594: INFO: Got endpoints: latency-svc-dff2z [1.199979848s]
Jul  4 08:59:56.786: INFO: Created: latency-svc-9w6r8
Jul  4 08:59:56.794: INFO: Got endpoints: latency-svc-9w6r8 [1.26025249s]
Jul  4 08:59:56.823: INFO: Created: latency-svc-gfspt
Jul  4 08:59:56.843: INFO: Got endpoints: latency-svc-gfspt [1.26145751s]
Jul  4 08:59:56.956: INFO: Created: latency-svc-kwg4t
Jul  4 08:59:56.958: INFO: Got endpoints: latency-svc-kwg4t [1.248446552s]
Jul  4 08:59:56.991: INFO: Created: latency-svc-pnzg2
Jul  4 08:59:57.021: INFO: Got endpoints: latency-svc-pnzg2 [1.260745804s]
Jul  4 08:59:57.052: INFO: Created: latency-svc-6jn4v
Jul  4 08:59:57.123: INFO: Got endpoints: latency-svc-6jn4v [1.313965844s]
Jul  4 08:59:57.124: INFO: Created: latency-svc-c4d4c
Jul  4 08:59:57.131: INFO: Got endpoints: latency-svc-c4d4c [1.260930503s]
Jul  4 08:59:57.166: INFO: Created: latency-svc-jx9vb
Jul  4 08:59:57.180: INFO: Got endpoints: latency-svc-jx9vb [1.262591626s]
Jul  4 08:59:57.222: INFO: Created: latency-svc-7rsrd
Jul  4 08:59:57.279: INFO: Got endpoints: latency-svc-7rsrd [1.281656109s]
Jul  4 08:59:57.280: INFO: Created: latency-svc-nxh75
Jul  4 08:59:57.282: INFO: Got endpoints: latency-svc-nxh75 [1.237562665s]
Jul  4 08:59:57.329: INFO: Created: latency-svc-bskrq
Jul  4 08:59:57.355: INFO: Got endpoints: latency-svc-bskrq [1.136335295s]
Jul  4 08:59:57.465: INFO: Created: latency-svc-ds6p5
Jul  4 08:59:57.474: INFO: Got endpoints: latency-svc-ds6p5 [1.21993056s]
Jul  4 08:59:57.511: INFO: Created: latency-svc-2tdmx
Jul  4 08:59:57.535: INFO: Got endpoints: latency-svc-2tdmx [1.178263563s]
Jul  4 08:59:57.632: INFO: Created: latency-svc-2t4qd
Jul  4 08:59:57.660: INFO: Got endpoints: latency-svc-2t4qd [1.267836991s]
Jul  4 08:59:57.661: INFO: Created: latency-svc-zttvw
Jul  4 08:59:57.699: INFO: Got endpoints: latency-svc-zttvw [1.263698586s]
Jul  4 08:59:57.793: INFO: Created: latency-svc-mr6sz
Jul  4 08:59:57.796: INFO: Got endpoints: latency-svc-mr6sz [1.20253058s]
Jul  4 08:59:57.830: INFO: Created: latency-svc-spd8d
Jul  4 08:59:57.841: INFO: Got endpoints: latency-svc-spd8d [1.047037346s]
Jul  4 08:59:57.874: INFO: Created: latency-svc-6lbt5
Jul  4 08:59:57.955: INFO: Got endpoints: latency-svc-6lbt5 [1.112582673s]
Jul  4 08:59:57.958: INFO: Created: latency-svc-42m7h
Jul  4 08:59:57.968: INFO: Got endpoints: latency-svc-42m7h [1.009077223s]
Jul  4 08:59:57.996: INFO: Created: latency-svc-rp48z
Jul  4 08:59:58.004: INFO: Got endpoints: latency-svc-rp48z [982.852307ms]
Jul  4 08:59:58.033: INFO: Created: latency-svc-6khvp
Jul  4 08:59:58.047: INFO: Got endpoints: latency-svc-6khvp [923.638425ms]
Jul  4 08:59:58.099: INFO: Created: latency-svc-jk6d4
Jul  4 08:59:58.106: INFO: Got endpoints: latency-svc-jk6d4 [975.305083ms]
Jul  4 08:59:58.140: INFO: Created: latency-svc-8kkh5
Jul  4 08:59:58.149: INFO: Got endpoints: latency-svc-8kkh5 [968.464391ms]
Jul  4 08:59:58.178: INFO: Created: latency-svc-t7jkt
Jul  4 08:59:58.278: INFO: Got endpoints: latency-svc-t7jkt [999.79149ms]
Jul  4 08:59:58.282: INFO: Created: latency-svc-r4hc8
Jul  4 08:59:58.287: INFO: Got endpoints: latency-svc-r4hc8 [1.004503991s]
Jul  4 08:59:58.340: INFO: Created: latency-svc-g55pd
Jul  4 08:59:58.353: INFO: Got endpoints: latency-svc-g55pd [998.658894ms]
Jul  4 08:59:58.447: INFO: Created: latency-svc-q6txr
Jul  4 08:59:58.478: INFO: Got endpoints: latency-svc-q6txr [1.003932357s]
Jul  4 08:59:58.480: INFO: Created: latency-svc-42f2l
Jul  4 08:59:58.498: INFO: Got endpoints: latency-svc-42f2l [963.364062ms]
Jul  4 08:59:58.519: INFO: Created: latency-svc-5qtsk
Jul  4 08:59:58.534: INFO: Got endpoints: latency-svc-5qtsk [873.949983ms]
Jul  4 08:59:58.632: INFO: Created: latency-svc-k9bb9
Jul  4 08:59:58.635: INFO: Got endpoints: latency-svc-k9bb9 [936.797724ms]
Jul  4 08:59:58.706: INFO: Created: latency-svc-xwvg8
Jul  4 08:59:58.720: INFO: Got endpoints: latency-svc-xwvg8 [924.107893ms]
Jul  4 08:59:58.794: INFO: Created: latency-svc-d98z5
Jul  4 08:59:58.799: INFO: Got endpoints: latency-svc-d98z5 [957.113094ms]
Jul  4 08:59:58.821: INFO: Created: latency-svc-cqwff
Jul  4 08:59:58.835: INFO: Got endpoints: latency-svc-cqwff [879.106512ms]
Jul  4 08:59:58.875: INFO: Created: latency-svc-hsbkv
Jul  4 08:59:58.975: INFO: Got endpoints: latency-svc-hsbkv [1.007660973s]
Jul  4 08:59:58.981: INFO: Created: latency-svc-xcgk6
Jul  4 08:59:58.991: INFO: Got endpoints: latency-svc-xcgk6 [986.835414ms]
Jul  4 08:59:59.015: INFO: Created: latency-svc-5825j
Jul  4 08:59:59.033: INFO: Got endpoints: latency-svc-5825j [986.939124ms]
Jul  4 08:59:59.061: INFO: Created: latency-svc-v65fz
Jul  4 08:59:59.135: INFO: Got endpoints: latency-svc-v65fz [1.028404643s]
Jul  4 08:59:59.136: INFO: Created: latency-svc-l66j6
Jul  4 08:59:59.147: INFO: Got endpoints: latency-svc-l66j6 [998.571538ms]
Jul  4 08:59:59.172: INFO: Created: latency-svc-vhcl2
Jul  4 08:59:59.190: INFO: Got endpoints: latency-svc-vhcl2 [912.055892ms]
Jul  4 08:59:59.218: INFO: Created: latency-svc-2s9fs
Jul  4 08:59:59.232: INFO: Got endpoints: latency-svc-2s9fs [945.589202ms]
Jul  4 08:59:59.285: INFO: Created: latency-svc-jndm2
Jul  4 08:59:59.304: INFO: Got endpoints: latency-svc-jndm2 [950.451174ms]
Jul  4 08:59:59.339: INFO: Created: latency-svc-ksrs8
Jul  4 08:59:59.352: INFO: Got endpoints: latency-svc-ksrs8 [873.640097ms]
Jul  4 08:59:59.382: INFO: Created: latency-svc-2tzqp
Jul  4 08:59:59.447: INFO: Got endpoints: latency-svc-2tzqp [948.512134ms]
Jul  4 08:59:59.449: INFO: Created: latency-svc-m62pl
Jul  4 08:59:59.454: INFO: Got endpoints: latency-svc-m62pl [920.28722ms]
Jul  4 08:59:59.483: INFO: Created: latency-svc-lqlps
Jul  4 08:59:59.497: INFO: Got endpoints: latency-svc-lqlps [862.026342ms]
Jul  4 08:59:59.521: INFO: Created: latency-svc-5xd6z
Jul  4 08:59:59.540: INFO: Got endpoints: latency-svc-5xd6z [819.041596ms]
Jul  4 08:59:59.597: INFO: Created: latency-svc-zlxx5
Jul  4 08:59:59.606: INFO: Got endpoints: latency-svc-zlxx5 [806.952255ms]
Jul  4 08:59:59.634: INFO: Created: latency-svc-vnlpm
Jul  4 08:59:59.666: INFO: Got endpoints: latency-svc-vnlpm [831.140443ms]
Jul  4 08:59:59.771: INFO: Created: latency-svc-2p9wb
Jul  4 08:59:59.773: INFO: Got endpoints: latency-svc-2p9wb [797.486482ms]
Jul  4 08:59:59.804: INFO: Created: latency-svc-pgcf4
Jul  4 08:59:59.822: INFO: Got endpoints: latency-svc-pgcf4 [830.825093ms]
Jul  4 08:59:59.844: INFO: Created: latency-svc-vsxpv
Jul  4 08:59:59.864: INFO: Got endpoints: latency-svc-vsxpv [830.717344ms]
Jul  4 08:59:59.937: INFO: Created: latency-svc-7tq7s
Jul  4 08:59:59.941: INFO: Got endpoints: latency-svc-7tq7s [806.030394ms]
Jul  4 08:59:59.977: INFO: Created: latency-svc-jrf76
Jul  4 08:59:59.997: INFO: Got endpoints: latency-svc-jrf76 [849.386134ms]
Jul  4 09:00:00.034: INFO: Created: latency-svc-vmhfq
Jul  4 09:00:00.099: INFO: Got endpoints: latency-svc-vmhfq [908.603702ms]
Jul  4 09:00:00.128: INFO: Created: latency-svc-nmhdp
Jul  4 09:00:00.154: INFO: Got endpoints: latency-svc-nmhdp [921.809155ms]
Jul  4 09:00:00.190: INFO: Created: latency-svc-2rkjz
Jul  4 09:00:00.261: INFO: Got endpoints: latency-svc-2rkjz [957.610911ms]
Jul  4 09:00:00.264: INFO: Created: latency-svc-b8wk5
Jul  4 09:00:00.288: INFO: Got endpoints: latency-svc-b8wk5 [935.968793ms]
Jul  4 09:00:00.334: INFO: Created: latency-svc-wwsst
Jul  4 09:00:00.346: INFO: Got endpoints: latency-svc-wwsst [899.331715ms]
Jul  4 09:00:00.458: INFO: Created: latency-svc-z2q8m
Jul  4 09:00:00.465: INFO: Got endpoints: latency-svc-z2q8m [1.010379199s]
Jul  4 09:00:00.531: INFO: Created: latency-svc-wdqd2
Jul  4 09:00:00.550: INFO: Got endpoints: latency-svc-wdqd2 [1.05252744s]
Jul  4 09:00:00.626: INFO: Created: latency-svc-xhc6x
Jul  4 09:00:00.634: INFO: Got endpoints: latency-svc-xhc6x [1.09453337s]
Jul  4 09:00:00.800: INFO: Created: latency-svc-czzp5
Jul  4 09:00:00.803: INFO: Got endpoints: latency-svc-czzp5 [1.197732108s]
Jul  4 09:00:00.834: INFO: Created: latency-svc-kfzxg
Jul  4 09:00:00.864: INFO: Got endpoints: latency-svc-kfzxg [1.197761461s]
Jul  4 09:00:00.899: INFO: Created: latency-svc-tmrgn
Jul  4 09:00:00.967: INFO: Got endpoints: latency-svc-tmrgn [1.194115267s]
Jul  4 09:00:00.969: INFO: Created: latency-svc-w4lr2
Jul  4 09:00:00.976: INFO: Got endpoints: latency-svc-w4lr2 [1.154439294s]
Jul  4 09:00:01.009: INFO: Created: latency-svc-ftcq5
Jul  4 09:00:01.025: INFO: Got endpoints: latency-svc-ftcq5 [1.160861364s]
Jul  4 09:00:01.051: INFO: Created: latency-svc-cq4r9
Jul  4 09:00:01.061: INFO: Got endpoints: latency-svc-cq4r9 [1.12026661s]
Jul  4 09:00:01.129: INFO: Created: latency-svc-nfccj
Jul  4 09:00:01.134: INFO: Got endpoints: latency-svc-nfccj [1.136968683s]
Jul  4 09:00:01.152: INFO: Created: latency-svc-8qq98
Jul  4 09:00:01.170: INFO: Got endpoints: latency-svc-8qq98 [1.070935869s]
Jul  4 09:00:01.194: INFO: Created: latency-svc-7qnbc
Jul  4 09:00:01.225: INFO: Got endpoints: latency-svc-7qnbc [1.070672022s]
Jul  4 09:00:01.285: INFO: Created: latency-svc-xmrx8
Jul  4 09:00:01.290: INFO: Got endpoints: latency-svc-xmrx8 [1.028600329s]
Jul  4 09:00:01.320: INFO: Created: latency-svc-c2mvz
Jul  4 09:00:01.339: INFO: Got endpoints: latency-svc-c2mvz [1.050336261s]
Jul  4 09:00:01.339: INFO: Latencies: [58.570424ms 92.307562ms 183.001092ms 228.511474ms 269.994034ms 354.307859ms 390.444535ms 500.43873ms 521.954658ms 557.899246ms 594.855406ms 631.548103ms 654.30477ms 683.392878ms 690.655497ms 696.584084ms 706.830853ms 708.929626ms 744.618073ms 755.781206ms 765.608954ms 770.874498ms 772.679302ms 781.191985ms 784.193226ms 789.070928ms 789.190048ms 791.90646ms 793.658398ms 797.486482ms 798.045607ms 798.182959ms 800.814618ms 800.996993ms 801.190935ms 801.419063ms 801.548918ms 802.291159ms 803.684545ms 806.030394ms 806.952255ms 810.140779ms 812.46668ms 813.199667ms 814.252293ms 815.018871ms 815.865965ms 817.984609ms 818.902879ms 819.041596ms 819.155032ms 822.463806ms 824.442835ms 827.592691ms 828.570029ms 830.717344ms 830.825093ms 831.080737ms 831.140443ms 831.310991ms 831.519147ms 836.638026ms 849.237343ms 849.386134ms 849.831995ms 849.931748ms 851.61926ms 855.184672ms 855.436826ms 858.843473ms 860.180068ms 860.600174ms 862.026342ms 866.34276ms 867.544176ms 868.077762ms 870.638561ms 873.640097ms 873.949983ms 879.106512ms 881.046597ms 890.943639ms 891.445717ms 899.331715ms 908.366669ms 908.603702ms 912.055892ms 920.28722ms 921.809155ms 923.014408ms 923.638425ms 924.107893ms 929.153703ms 930.384299ms 932.844385ms 935.968793ms 936.797724ms 944.103358ms 945.589202ms 948.512134ms 950.451174ms 953.639722ms 956.651742ms 957.113094ms 957.610911ms 962.285595ms 962.419198ms 963.364062ms 968.464391ms 968.9392ms 972.32403ms 974.333961ms 975.305083ms 982.852307ms 986.146648ms 986.835414ms 986.939124ms 995.827955ms 998.571538ms 998.658894ms 998.665082ms 999.79149ms 1.002718491s 1.003932357s 1.004503991s 1.005298629s 1.005823057s 1.007660973s 1.009077223s 1.010379199s 1.016602487s 1.028404643s 1.028600329s 1.030646814s 1.032270777s 1.040669023s 1.045180574s 1.047037346s 1.050213366s 1.050336261s 1.05252744s 1.065402484s 1.067140001s 1.070672022s 1.070935869s 1.071231517s 1.080945833s 1.091341954s 1.09453337s 1.097210158s 1.097994007s 1.098527034s 1.098918447s 1.099515979s 1.104243517s 1.106109587s 1.108320495s 1.109300751s 1.111466955s 1.112582673s 1.114172736s 1.119648909s 1.12026661s 1.129031397s 1.130479584s 1.133258042s 1.136335295s 1.136968683s 1.154439294s 1.154495722s 1.160861364s 1.163367438s 1.172742055s 1.175893226s 1.178263563s 1.18339336s 1.191071497s 1.193866251s 1.194115267s 1.196298705s 1.197055989s 1.197732108s 1.197761461s 1.199979848s 1.20253058s 1.21993056s 1.232283494s 1.237562665s 1.243237582s 1.247218322s 1.248446552s 1.26025249s 1.260745804s 1.260930503s 1.26145751s 1.262591626s 1.263698586s 1.267836991s 1.281656109s 1.313965844s]
Jul  4 09:00:01.339: INFO: 50 %ile: 950.451174ms
Jul  4 09:00:01.339: INFO: 90 %ile: 1.197055989s
Jul  4 09:00:01.339: INFO: 99 %ile: 1.281656109s
Jul  4 09:00:01.339: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:00:01.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7880" for this suite.

• [SLOW TEST:18.199 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":120,"skipped":2047,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:00:01.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-02013794-1178-4778-b7db-8ee51e545da3
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-02013794-1178-4778-b7db-8ee51e545da3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:00:22.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-60" for this suite.

• [SLOW TEST:20.965 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2055,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:00:22.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2919
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-2919
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2919
Jul  4 09:00:22.871: INFO: Found 0 stateful pods, waiting for 1
Jul  4 09:00:32.881: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jul  4 09:00:32.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  4 09:00:33.351: INFO: stderr: "I0704 09:00:33.201775    1897 log.go:172] (0xc000ad6000) (0xc0003eb540) Create stream\nI0704 09:00:33.201816    1897 log.go:172] (0xc000ad6000) (0xc0003eb540) Stream added, broadcasting: 1\nI0704 09:00:33.203507    1897 log.go:172] (0xc000ad6000) Reply frame received for 1\nI0704 09:00:33.203557    1897 log.go:172] (0xc000ad6000) (0xc0008be000) Create stream\nI0704 09:00:33.203576    1897 log.go:172] (0xc000ad6000) (0xc0008be000) Stream added, broadcasting: 3\nI0704 09:00:33.204238    1897 log.go:172] (0xc000ad6000) Reply frame received for 3\nI0704 09:00:33.204252    1897 log.go:172] (0xc000ad6000) (0xc0008be140) Create stream\nI0704 09:00:33.204258    1897 log.go:172] (0xc000ad6000) (0xc0008be140) Stream added, broadcasting: 5\nI0704 09:00:33.205017    1897 log.go:172] (0xc000ad6000) Reply frame received for 5\nI0704 09:00:33.270013    1897 log.go:172] (0xc000ad6000) Data frame received for 5\nI0704 09:00:33.270033    1897 log.go:172] (0xc0008be140) (5) Data frame handling\nI0704 09:00:33.270046    1897 log.go:172] (0xc0008be140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0704 09:00:33.346476    1897 log.go:172] (0xc000ad6000) Data frame received for 3\nI0704 09:00:33.346515    1897 log.go:172] (0xc0008be000) (3) Data frame handling\nI0704 09:00:33.346527    1897 log.go:172] (0xc0008be000) (3) Data frame sent\nI0704 09:00:33.346538    1897 log.go:172] (0xc000ad6000) Data frame received for 3\nI0704 09:00:33.346545    1897 log.go:172] (0xc0008be000) (3) Data frame handling\nI0704 09:00:33.346569    1897 log.go:172] (0xc000ad6000) Data frame received for 5\nI0704 09:00:33.346579    1897 log.go:172] (0xc0008be140) (5) Data frame handling\nI0704 09:00:33.347945    1897 log.go:172] (0xc000ad6000) Data frame received for 1\nI0704 09:00:33.347955    1897 log.go:172] (0xc0003eb540) (1) Data frame handling\nI0704 09:00:33.347960    1897 log.go:172] (0xc0003eb540) (1) Data frame sent\nI0704 09:00:33.347966    1897 log.go:172] (0xc000ad6000) (0xc0003eb540) Stream removed, broadcasting: 1\nI0704 09:00:33.347973    1897 log.go:172] (0xc000ad6000) Go away received\nI0704 09:00:33.348248    1897 log.go:172] (0xc000ad6000) (0xc0003eb540) Stream removed, broadcasting: 1\nI0704 09:00:33.348259    1897 log.go:172] (0xc000ad6000) (0xc0008be000) Stream removed, broadcasting: 3\nI0704 09:00:33.348264    1897 log.go:172] (0xc000ad6000) (0xc0008be140) Stream removed, broadcasting: 5\n"
Jul  4 09:00:33.351: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  4 09:00:33.351: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  4 09:00:33.361: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul  4 09:00:43.377: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  4 09:00:43.377: INFO: Waiting for statefulset status.replicas updated to 0
Jul  4 09:00:43.496: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  4 09:00:43.496: INFO: ss-0  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:22 +0000 UTC  }]
Jul  4 09:00:43.497: INFO: ss-1                 Pending         []
Jul  4 09:00:43.497: INFO: 
Jul  4 09:00:43.497: INFO: StatefulSet ss has not reached scale 3, at 2
Jul  4 09:00:44.503: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.89490548s
Jul  4 09:00:45.508: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.888149975s
Jul  4 09:00:46.513: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.883441804s
Jul  4 09:00:47.519: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.878157084s
Jul  4 09:00:48.524: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.87282758s
Jul  4 09:00:49.530: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.867035967s
Jul  4 09:00:50.535: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.861459252s
Jul  4 09:00:51.541: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.856330747s
Jul  4 09:00:52.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 850.344639ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2919
Jul  4 09:00:53.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:00:53.783: INFO: stderr: "I0704 09:00:53.693066    1914 log.go:172] (0xc0002f66e0) (0xc0001fc280) Create stream\nI0704 09:00:53.693321    1914 log.go:172] (0xc0002f66e0) (0xc0001fc280) Stream added, broadcasting: 1\nI0704 09:00:53.696611    1914 log.go:172] (0xc0002f66e0) Reply frame received for 1\nI0704 09:00:53.696651    1914 log.go:172] (0xc0002f66e0) (0xc0006119a0) Create stream\nI0704 09:00:53.696663    1914 log.go:172] (0xc0002f66e0) (0xc0006119a0) Stream added, broadcasting: 3\nI0704 09:00:53.701391    1914 log.go:172] (0xc0002f66e0) Reply frame received for 3\nI0704 09:00:53.701419    1914 log.go:172] (0xc0002f66e0) (0xc000773360) Create stream\nI0704 09:00:53.701427    1914 log.go:172] (0xc0002f66e0) (0xc000773360) Stream added, broadcasting: 5\nI0704 09:00:53.702517    1914 log.go:172] (0xc0002f66e0) Reply frame received for 5\nI0704 09:00:53.776079    1914 log.go:172] (0xc0002f66e0) Data frame received for 5\nI0704 09:00:53.776226    1914 log.go:172] (0xc0002f66e0) Data frame received for 3\nI0704 09:00:53.776276    1914 log.go:172] (0xc0006119a0) (3) Data frame handling\nI0704 09:00:53.776300    1914 log.go:172] (0xc0006119a0) (3) Data frame sent\nI0704 09:00:53.776317    1914 log.go:172] (0xc0002f66e0) Data frame received for 3\nI0704 09:00:53.776330    1914 log.go:172] (0xc0006119a0) (3) Data frame handling\nI0704 09:00:53.776359    1914 log.go:172] (0xc000773360) (5) Data frame handling\nI0704 09:00:53.776390    1914 log.go:172] (0xc000773360) (5) Data frame sent\nI0704 09:00:53.776406    1914 log.go:172] (0xc0002f66e0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0704 09:00:53.776418    1914 log.go:172] (0xc000773360) (5) Data frame handling\nI0704 09:00:53.777751    1914 log.go:172] (0xc0002f66e0) Data frame received for 1\nI0704 09:00:53.777777    1914 log.go:172] (0xc0001fc280) (1) Data frame handling\nI0704 09:00:53.777809    1914 log.go:172] (0xc0001fc280) (1) Data frame sent\nI0704 09:00:53.777999    1914 log.go:172] (0xc0002f66e0) (0xc0001fc280) Stream removed, broadcasting: 1\nI0704 09:00:53.778029    1914 log.go:172] (0xc0002f66e0) Go away received\nI0704 09:00:53.778497    1914 log.go:172] (0xc0002f66e0) (0xc0001fc280) Stream removed, broadcasting: 1\nI0704 09:00:53.778524    1914 log.go:172] (0xc0002f66e0) (0xc0006119a0) Stream removed, broadcasting: 3\nI0704 09:00:53.778537    1914 log.go:172] (0xc0002f66e0) (0xc000773360) Stream removed, broadcasting: 5\n"
Jul  4 09:00:53.783: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  4 09:00:53.783: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  4 09:00:53.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:00:53.991: INFO: stderr: "I0704 09:00:53.911760    1933 log.go:172] (0xc00052c2c0) (0xc00071d540) Create stream\nI0704 09:00:53.911814    1933 log.go:172] (0xc00052c2c0) (0xc00071d540) Stream added, broadcasting: 1\nI0704 09:00:53.914178    1933 log.go:172] (0xc00052c2c0) Reply frame received for 1\nI0704 09:00:53.914217    1933 log.go:172] (0xc00052c2c0) (0xc00095e000) Create stream\nI0704 09:00:53.914230    1933 log.go:172] (0xc00052c2c0) (0xc00095e000) Stream added, broadcasting: 3\nI0704 09:00:53.914988    1933 log.go:172] (0xc00052c2c0) Reply frame received for 3\nI0704 09:00:53.915016    1933 log.go:172] (0xc00052c2c0) (0xc000a18000) Create stream\nI0704 09:00:53.915024    1933 log.go:172] (0xc00052c2c0) (0xc000a18000) Stream added, broadcasting: 5\nI0704 09:00:53.915820    1933 log.go:172] (0xc00052c2c0) Reply frame received for 5\nI0704 09:00:53.984462    1933 log.go:172] (0xc00052c2c0) Data frame received for 3\nI0704 09:00:53.984499    1933 log.go:172] (0xc00095e000) (3) Data frame handling\nI0704 09:00:53.984537    1933 log.go:172] (0xc00095e000) (3) Data frame sent\nI0704 09:00:53.984553    1933 log.go:172] (0xc00052c2c0) Data frame received for 3\nI0704 09:00:53.984565    1933 log.go:172] (0xc00095e000) (3) Data frame handling\nI0704 09:00:53.984669    1933 log.go:172] (0xc00052c2c0) Data frame received for 5\nI0704 09:00:53.984706    1933 log.go:172] (0xc000a18000) (5) Data frame handling\nI0704 09:00:53.984737    1933 log.go:172] (0xc000a18000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0704 09:00:53.984902    1933 log.go:172] (0xc00052c2c0) Data frame received for 5\nI0704 09:00:53.984923    1933 log.go:172] (0xc000a18000) (5) Data frame handling\nI0704 09:00:53.987024    1933 log.go:172] (0xc00052c2c0) Data frame received for 1\nI0704 09:00:53.987054    1933 log.go:172] (0xc00071d540) (1) Data frame handling\nI0704 09:00:53.987070    1933 log.go:172] (0xc00071d540) (1) Data frame sent\nI0704 09:00:53.987095    1933 log.go:172] (0xc00052c2c0) (0xc00071d540) Stream removed, broadcasting: 1\nI0704 09:00:53.987128    1933 log.go:172] (0xc00052c2c0) Go away received\nI0704 09:00:53.987487    1933 log.go:172] (0xc00052c2c0) (0xc00071d540) Stream removed, broadcasting: 1\nI0704 09:00:53.987510    1933 log.go:172] (0xc00052c2c0) (0xc00095e000) Stream removed, broadcasting: 3\nI0704 09:00:53.987520    1933 log.go:172] (0xc00052c2c0) (0xc000a18000) Stream removed, broadcasting: 5\n"
Jul  4 09:00:53.991: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  4 09:00:53.991: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  4 09:00:53.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:00:54.196: INFO: stderr: "I0704 09:00:54.126316    1956 log.go:172] (0xc00080ab00) (0xc00079b5e0) Create stream\nI0704 09:00:54.126385    1956 log.go:172] (0xc00080ab00) (0xc00079b5e0) Stream added, broadcasting: 1\nI0704 09:00:54.128776    1956 log.go:172] (0xc00080ab00) Reply frame received for 1\nI0704 09:00:54.128805    1956 log.go:172] (0xc00080ab00) (0xc000804000) Create stream\nI0704 09:00:54.128819    1956 log.go:172] (0xc00080ab00) (0xc000804000) Stream added, broadcasting: 3\nI0704 09:00:54.130092    1956 log.go:172] (0xc00080ab00) Reply frame received for 3\nI0704 09:00:54.130118    1956 log.go:172] (0xc00080ab00) (0xc000804140) Create stream\nI0704 09:00:54.130124    1956 log.go:172] (0xc00080ab00) (0xc000804140) Stream added, broadcasting: 5\nI0704 09:00:54.130932    1956 log.go:172] (0xc00080ab00) Reply frame received for 5\nI0704 09:00:54.189852    1956 log.go:172] (0xc00080ab00) Data frame received for 5\nI0704 09:00:54.189904    1956 log.go:172] (0xc000804140) (5) Data frame handling\nI0704 09:00:54.189930    1956 log.go:172] (0xc000804140) (5) Data frame sent\nI0704 09:00:54.189943    1956 log.go:172] (0xc00080ab00) Data frame received for 5\nI0704 09:00:54.189955    1956 log.go:172] (0xc000804140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0704 09:00:54.190028    1956 log.go:172] (0xc00080ab00) Data frame received for 3\nI0704 09:00:54.190130    1956 log.go:172] (0xc000804000) (3) Data frame handling\nI0704 09:00:54.190188    1956 log.go:172] (0xc000804000) (3) Data frame sent\nI0704 09:00:54.190227    1956 log.go:172] (0xc00080ab00) Data frame received for 3\nI0704 09:00:54.190258    1956 log.go:172] (0xc000804000) (3) Data frame handling\nI0704 09:00:54.192000    1956 log.go:172] (0xc00080ab00) Data frame received for 1\nI0704 09:00:54.192051    1956 log.go:172] (0xc00079b5e0) (1) Data frame handling\nI0704 09:00:54.192091    1956 log.go:172] (0xc00079b5e0) (1) Data frame sent\nI0704 09:00:54.192139    1956 log.go:172] (0xc00080ab00) (0xc00079b5e0) Stream removed, broadcasting: 1\nI0704 09:00:54.192175    1956 log.go:172] (0xc00080ab00) Go away received\nI0704 09:00:54.192547    1956 log.go:172] (0xc00080ab00) (0xc00079b5e0) Stream removed, broadcasting: 1\nI0704 09:00:54.192565    1956 log.go:172] (0xc00080ab00) (0xc000804000) Stream removed, broadcasting: 3\nI0704 09:00:54.192575    1956 log.go:172] (0xc00080ab00) (0xc000804140) Stream removed, broadcasting: 5\n"
Jul  4 09:00:54.196: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  4 09:00:54.196: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  4 09:00:54.214: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:00:54.215: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:00:54.215: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jul  4 09:00:54.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  4 09:00:54.430: INFO: stderr: "I0704 09:00:54.346724    1978 log.go:172] (0xc000546dc0) (0xc0006c5ae0) Create stream\nI0704 09:00:54.346775    1978 log.go:172] (0xc000546dc0) (0xc0006c5ae0) Stream added, broadcasting: 1\nI0704 09:00:54.349316    1978 log.go:172] (0xc000546dc0) Reply frame received for 1\nI0704 09:00:54.349356    1978 log.go:172] (0xc000546dc0) (0xc0008de000) Create stream\nI0704 09:00:54.349365    1978 log.go:172] (0xc000546dc0) (0xc0008de000) Stream added, broadcasting: 3\nI0704 09:00:54.350453    1978 log.go:172] (0xc000546dc0) Reply frame received for 3\nI0704 09:00:54.350496    1978 log.go:172] (0xc000546dc0) (0xc0006c5cc0) Create stream\nI0704 09:00:54.350508    1978 log.go:172] (0xc000546dc0) (0xc0006c5cc0) Stream added, broadcasting: 5\nI0704 09:00:54.351369    1978 log.go:172] (0xc000546dc0) Reply frame received for 5\nI0704 09:00:54.422293    1978 log.go:172] (0xc000546dc0) Data frame received for 3\nI0704 09:00:54.422340    1978 log.go:172] (0xc0008de000) (3) Data frame handling\nI0704 09:00:54.422370    1978 log.go:172] (0xc0008de000) (3) Data frame sent\nI0704 09:00:54.422424    1978 log.go:172] (0xc000546dc0) Data frame received for 5\nI0704 09:00:54.422439    1978 log.go:172] (0xc0006c5cc0) (5) Data frame handling\nI0704 09:00:54.422457    1978 log.go:172] (0xc0006c5cc0) (5) Data frame sent\nI0704 09:00:54.422475    1978 log.go:172] (0xc000546dc0) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0704 09:00:54.422493    1978 log.go:172] (0xc0006c5cc0) (5) Data frame handling\nI0704 09:00:54.422591    1978 log.go:172] (0xc000546dc0) Data frame received for 3\nI0704 09:00:54.422618    1978 log.go:172] (0xc0008de000) (3) Data frame handling\nI0704 09:00:54.424354    1978 log.go:172] (0xc000546dc0) Data frame received for 1\nI0704 09:00:54.424384    1978 log.go:172] (0xc0006c5ae0) (1) Data frame handling\nI0704 09:00:54.424420    1978 log.go:172] (0xc0006c5ae0) (1) Data frame sent\nI0704 09:00:54.424554    1978 log.go:172] (0xc000546dc0) (0xc0006c5ae0) Stream removed, broadcasting: 1\nI0704 09:00:54.424985    1978 log.go:172] (0xc000546dc0) Go away received\nI0704 09:00:54.425029    1978 log.go:172] (0xc000546dc0) (0xc0006c5ae0) Stream removed, broadcasting: 1\nI0704 09:00:54.425058    1978 log.go:172] (0xc000546dc0) (0xc0008de000) Stream removed, broadcasting: 3\nI0704 09:00:54.425083    1978 log.go:172] (0xc000546dc0) (0xc0006c5cc0) Stream removed, broadcasting: 5\n"
Jul  4 09:00:54.430: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  4 09:00:54.430: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  4 09:00:54.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  4 09:00:54.699: INFO: stderr: "I0704 09:00:54.562799    2002 log.go:172] (0xc00072ca50) (0xc00070e000) Create stream\nI0704 09:00:54.562868    2002 log.go:172] (0xc00072ca50) (0xc00070e000) Stream added, broadcasting: 1\nI0704 09:00:54.566747    2002 log.go:172] (0xc00072ca50) Reply frame received for 1\nI0704 09:00:54.566800    2002 log.go:172] (0xc00072ca50) (0xc0006d4000) Create stream\nI0704 09:00:54.566815    2002 log.go:172] (0xc00072ca50) (0xc0006d4000) Stream added, broadcasting: 3\nI0704 09:00:54.567938    2002 log.go:172] (0xc00072ca50) Reply frame received for 3\nI0704 09:00:54.567964    2002 log.go:172] (0xc00072ca50) (0xc000613ae0) Create stream\nI0704 09:00:54.567976    2002 log.go:172] (0xc00072ca50) (0xc000613ae0) Stream added, broadcasting: 5\nI0704 09:00:54.568978    2002 log.go:172] (0xc00072ca50) Reply frame received for 5\nI0704 09:00:54.640533    2002 log.go:172] (0xc00072ca50) Data frame received for 5\nI0704 09:00:54.640568    2002 log.go:172] (0xc000613ae0) (5) Data frame handling\nI0704 09:00:54.640599    2002 log.go:172] (0xc000613ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0704 09:00:54.691588    2002 log.go:172] (0xc00072ca50) Data frame received for 3\nI0704 09:00:54.691620    2002 log.go:172] (0xc0006d4000) (3) Data frame handling\nI0704 09:00:54.691640    2002 log.go:172] (0xc0006d4000) (3) Data frame sent\nI0704 09:00:54.692419    2002 log.go:172] (0xc00072ca50) Data frame received for 5\nI0704 09:00:54.692481    2002 log.go:172] (0xc000613ae0) (5) Data frame handling\nI0704 09:00:54.692532    2002 log.go:172] (0xc00072ca50) Data frame received for 3\nI0704 09:00:54.692551    2002 log.go:172] (0xc0006d4000) (3) Data frame handling\nI0704 09:00:54.694842    2002 log.go:172] (0xc00072ca50) Data frame received for 1\nI0704 09:00:54.694864    2002 log.go:172] (0xc00070e000) (1) Data frame handling\nI0704 09:00:54.694873    2002 log.go:172] (0xc00070e000) (1) Data frame sent\nI0704 09:00:54.694884    2002 log.go:172] (0xc00072ca50) (0xc00070e000) Stream removed, broadcasting: 1\nI0704 09:00:54.694948    2002 log.go:172] (0xc00072ca50) Go away received\nI0704 09:00:54.695217    2002 log.go:172] (0xc00072ca50) (0xc00070e000) Stream removed, broadcasting: 1\nI0704 09:00:54.695236    2002 log.go:172] (0xc00072ca50) (0xc0006d4000) Stream removed, broadcasting: 3\nI0704 09:00:54.695247    2002 log.go:172] (0xc00072ca50) (0xc000613ae0) Stream removed, broadcasting: 5\n"
Jul  4 09:00:54.699: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  4 09:00:54.699: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  4 09:00:54.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  4 09:00:54.938: INFO: stderr: "I0704 09:00:54.827953    2026 log.go:172] (0xc0009fc630) (0xc000697d60) Create stream\nI0704 09:00:54.828013    2026 log.go:172] (0xc0009fc630) (0xc000697d60) Stream added, broadcasting: 1\nI0704 09:00:54.831864    2026 log.go:172] (0xc0009fc630) Reply frame received for 1\nI0704 09:00:54.831903    2026 log.go:172] (0xc0009fc630) (0xc0005d6640) Create stream\nI0704 09:00:54.831920    2026 log.go:172] (0xc0009fc630) (0xc0005d6640) Stream added, broadcasting: 3\nI0704 09:00:54.833061    2026 log.go:172] (0xc0009fc630) Reply frame received for 3\nI0704 09:00:54.833096    2026 log.go:172] (0xc0009fc630) (0xc0002bd400) Create stream\nI0704 09:00:54.833250    2026 log.go:172] (0xc0009fc630) (0xc0002bd400) Stream added, broadcasting: 5\nI0704 09:00:54.834345    2026 log.go:172] (0xc0009fc630) Reply frame received for 5\nI0704 09:00:54.893520    2026 log.go:172] (0xc0009fc630) Data frame received for 5\nI0704 09:00:54.893551    2026 log.go:172] (0xc0002bd400) (5) Data frame handling\nI0704 09:00:54.893571    2026 log.go:172] (0xc0002bd400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0704 09:00:54.931163    2026 log.go:172] (0xc0009fc630) Data frame received for 3\nI0704 09:00:54.931302    2026 log.go:172] (0xc0005d6640) (3) Data frame handling\nI0704 09:00:54.931373    2026 log.go:172] (0xc0005d6640) (3) Data frame sent\nI0704 09:00:54.931457    2026 log.go:172] (0xc0009fc630) Data frame received for 3\nI0704 09:00:54.931533    2026 log.go:172] (0xc0005d6640) (3) Data frame handling\nI0704 09:00:54.931930    2026 log.go:172] (0xc0009fc630) Data frame received for 5\nI0704 09:00:54.931962    2026 log.go:172] (0xc0002bd400) (5) Data frame handling\nI0704 09:00:54.933655    2026 log.go:172] (0xc0009fc630) Data frame received for 1\nI0704 09:00:54.933695    2026 log.go:172] (0xc000697d60) (1) Data frame handling\nI0704 09:00:54.933735    2026 log.go:172] (0xc000697d60) (1) Data frame sent\nI0704 09:00:54.933945    2026 log.go:172] (0xc0009fc630) (0xc000697d60) Stream removed, broadcasting: 1\nI0704 09:00:54.934267    2026 log.go:172] (0xc0009fc630) (0xc000697d60) Stream removed, broadcasting: 1\nI0704 09:00:54.934291    2026 log.go:172] (0xc0009fc630) (0xc0005d6640) Stream removed, broadcasting: 3\nI0704 09:00:54.934301    2026 log.go:172] (0xc0009fc630) (0xc0002bd400) Stream removed, broadcasting: 5\n"
Jul  4 09:00:54.938: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  4 09:00:54.938: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  4 09:00:54.938: INFO: Waiting for statefulset status.replicas updated to 0
Jul  4 09:00:54.950: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul  4 09:01:04.960: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  4 09:01:04.960: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul  4 09:01:04.960: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul  4 09:01:04.973: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  4 09:01:04.973: INFO: ss-0  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:22 +0000 UTC  }]
Jul  4 09:01:04.973: INFO: ss-1  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:04.973: INFO: ss-2  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:04.973: INFO: 
Jul  4 09:01:04.973: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  4 09:01:06.009: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  4 09:01:06.009: INFO: ss-0  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:22 +0000 UTC  }]
Jul  4 09:01:06.009: INFO: ss-1  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:06.009: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:06.009: INFO: 
Jul  4 09:01:06.009: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  4 09:01:07.014: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  4 09:01:07.014: INFO: ss-0  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:22 +0000 UTC  }]
Jul  4 09:01:07.014: INFO: ss-1  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:07.014: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:07.014: INFO: 
Jul  4 09:01:07.014: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  4 09:01:08.020: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  4 09:01:08.020: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:22 +0000 UTC  }]
Jul  4 09:01:08.020: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:08.020: INFO: ss-2  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:08.020: INFO: 
Jul  4 09:01:08.020: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  4 09:01:09.038: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  4 09:01:09.038: INFO: ss-1  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:09.038: INFO: 
Jul  4 09:01:09.038: INFO: StatefulSet ss has not reached scale 0, at 1
Jul  4 09:01:10.043: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  4 09:01:10.043: INFO: ss-1  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:10.043: INFO: 
Jul  4 09:01:10.043: INFO: StatefulSet ss has not reached scale 0, at 1
Jul  4 09:01:11.047: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  4 09:01:11.047: INFO: ss-1  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:11.047: INFO: 
Jul  4 09:01:11.047: INFO: StatefulSet ss has not reached scale 0, at 1
Jul  4 09:01:12.052: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  4 09:01:12.052: INFO: ss-1  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:12.053: INFO: 
Jul  4 09:01:12.053: INFO: StatefulSet ss has not reached scale 0, at 1
Jul  4 09:01:13.057: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  4 09:01:13.057: INFO: ss-1  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:13.057: INFO: 
Jul  4 09:01:13.057: INFO: StatefulSet ss has not reached scale 0, at 1
Jul  4 09:01:14.062: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  4 09:01:14.062: INFO: ss-1  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-04 09:00:43 +0000 UTC  }]
Jul  4 09:01:14.062: INFO: 
Jul  4 09:01:14.062: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2919
Jul  4 09:01:15.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:01:15.206: INFO: rc: 1
Jul  4 09:01:15.206: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Jul  4 09:01:25.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:01:25.305: INFO: rc: 1
Jul  4 09:01:25.305: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:01:35.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:01:35.406: INFO: rc: 1
Jul  4 09:01:35.406: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:01:45.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:01:45.508: INFO: rc: 1
Jul  4 09:01:45.508: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:01:55.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:01:55.616: INFO: rc: 1
Jul  4 09:01:55.616: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:02:05.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:02:08.419: INFO: rc: 1
Jul  4 09:02:08.419: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:02:18.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:02:18.519: INFO: rc: 1
Jul  4 09:02:18.519: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:02:28.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:02:28.612: INFO: rc: 1
Jul  4 09:02:28.612: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:02:38.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:02:38.714: INFO: rc: 1
Jul  4 09:02:38.714: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:02:48.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:02:48.824: INFO: rc: 1
Jul  4 09:02:48.824: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:02:58.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:02:58.920: INFO: rc: 1
Jul  4 09:02:58.920: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:03:08.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:03:09.235: INFO: rc: 1
Jul  4 09:03:09.236: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:03:19.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:03:19.331: INFO: rc: 1
Jul  4 09:03:19.331: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:03:29.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:03:29.435: INFO: rc: 1
Jul  4 09:03:29.435: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:03:39.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:03:39.534: INFO: rc: 1
Jul  4 09:03:39.534: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:03:49.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:03:49.640: INFO: rc: 1
Jul  4 09:03:49.640: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:03:59.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:03:59.757: INFO: rc: 1
Jul  4 09:03:59.757: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:04:09.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:04:09.862: INFO: rc: 1
Jul  4 09:04:09.862: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:04:19.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:04:19.953: INFO: rc: 1
Jul  4 09:04:19.953: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:04:29.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:04:30.054: INFO: rc: 1
Jul  4 09:04:30.054: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:04:40.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:04:40.155: INFO: rc: 1
Jul  4 09:04:40.155: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jul  4 09:06:21.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2919 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:06:21.119: INFO: rc: 1
Jul  4 09:06:21.120: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
Jul  4 09:06:21.120: INFO: Scaling statefulset ss to 0
Jul  4 09:06:21.128: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul  4 09:06:21.130: INFO: Deleting all statefulset in ns statefulset-2919
Jul  4 09:06:21.133: INFO: Scaling statefulset ss to 0
Jul  4 09:06:21.142: INFO: Waiting for statefulset status.replicas updated to 0
Jul  4 09:06:21.144: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:06:21.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2919" for this suite.

• [SLOW TEST:358.850 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":122,"skipped":2064,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:06:21.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Jul  4 09:06:21.239: INFO: Waiting up to 5m0s for pod "pod-6c869e0e-912f-4789-9ffd-08a46c00df43" in namespace "emptydir-3162" to be "success or failure"
Jul  4 09:06:21.244: INFO: Pod "pod-6c869e0e-912f-4789-9ffd-08a46c00df43": Phase="Pending", Reason="", readiness=false. Elapsed: 5.166967ms
Jul  4 09:06:23.303: INFO: Pod "pod-6c869e0e-912f-4789-9ffd-08a46c00df43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063928959s
Jul  4 09:06:25.307: INFO: Pod "pod-6c869e0e-912f-4789-9ffd-08a46c00df43": Phase="Running", Reason="", readiness=true. Elapsed: 4.068102754s
Jul  4 09:06:27.339: INFO: Pod "pod-6c869e0e-912f-4789-9ffd-08a46c00df43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100551022s
STEP: Saw pod success
Jul  4 09:06:27.339: INFO: Pod "pod-6c869e0e-912f-4789-9ffd-08a46c00df43" satisfied condition "success or failure"
Jul  4 09:06:27.342: INFO: Trying to get logs from node jerma-worker2 pod pod-6c869e0e-912f-4789-9ffd-08a46c00df43 container test-container: 
STEP: delete the pod
Jul  4 09:06:27.375: INFO: Waiting for pod pod-6c869e0e-912f-4789-9ffd-08a46c00df43 to disappear
Jul  4 09:06:27.379: INFO: Pod pod-6c869e0e-912f-4789-9ffd-08a46c00df43 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:06:27.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3162" for this suite.

• [SLOW TEST:6.221 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2065,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:06:27.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul  4 09:06:32.020: INFO: Successfully updated pod "pod-update-activedeadlineseconds-88078d7c-9b93-4a52-9e73-5f6fc1c629c8"
Jul  4 09:06:32.020: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-88078d7c-9b93-4a52-9e73-5f6fc1c629c8" in namespace "pods-8270" to be "terminated due to deadline exceeded"
Jul  4 09:06:32.423: INFO: Pod "pod-update-activedeadlineseconds-88078d7c-9b93-4a52-9e73-5f6fc1c629c8": Phase="Running", Reason="", readiness=true. Elapsed: 402.984084ms
Jul  4 09:06:34.427: INFO: Pod "pod-update-activedeadlineseconds-88078d7c-9b93-4a52-9e73-5f6fc1c629c8": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.407543686s
Jul  4 09:06:34.427: INFO: Pod "pod-update-activedeadlineseconds-88078d7c-9b93-4a52-9e73-5f6fc1c629c8" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:06:34.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8270" for this suite.

• [SLOW TEST:7.051 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2078,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:06:34.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-61cc34e8-107b-4843-90d6-dfe782cd008c
STEP: Creating configMap with name cm-test-opt-upd-b360fc58-f6ec-4a26-8ab2-37afc4613540
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-61cc34e8-107b-4843-90d6-dfe782cd008c
STEP: Updating configmap cm-test-opt-upd-b360fc58-f6ec-4a26-8ab2-37afc4613540
STEP: Creating configMap with name cm-test-opt-create-7a2c9375-e4ad-463c-a18b-99f4e09206eb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:08:05.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2307" for this suite.

• [SLOW TEST:90.749 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2083,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:08:05.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul  4 09:08:10.450: INFO: Successfully updated pod "annotationupdateaf31a543-af2d-410b-bf2a-f699500d0aa7"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:08:12.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8033" for this suite.

• [SLOW TEST:7.312 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2165,"failed":0}
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:08:12.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0704 09:08:24.911383       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  4 09:08:24.911: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:08:24.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1800" for this suite.

• [SLOW TEST:12.417 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":127,"skipped":2165,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:08:24.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-7270
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  4 09:08:25.277: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  4 09:08:49.638: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.77:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7270 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:08:49.638: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:08:49.675429       6 log.go:172] (0xc002c404d0) (0xc002801b80) Create stream
I0704 09:08:49.675455       6 log.go:172] (0xc002c404d0) (0xc002801b80) Stream added, broadcasting: 1
I0704 09:08:49.676837       6 log.go:172] (0xc002c404d0) Reply frame received for 1
I0704 09:08:49.676892       6 log.go:172] (0xc002c404d0) (0xc0017560a0) Create stream
I0704 09:08:49.676910       6 log.go:172] (0xc002c404d0) (0xc0017560a0) Stream added, broadcasting: 3
I0704 09:08:49.677927       6 log.go:172] (0xc002c404d0) Reply frame received for 3
I0704 09:08:49.677951       6 log.go:172] (0xc002c404d0) (0xc002801c20) Create stream
I0704 09:08:49.677958       6 log.go:172] (0xc002c404d0) (0xc002801c20) Stream added, broadcasting: 5
I0704 09:08:49.678609       6 log.go:172] (0xc002c404d0) Reply frame received for 5
I0704 09:08:49.759433       6 log.go:172] (0xc002c404d0) Data frame received for 3
I0704 09:08:49.759467       6 log.go:172] (0xc0017560a0) (3) Data frame handling
I0704 09:08:49.759485       6 log.go:172] (0xc0017560a0) (3) Data frame sent
I0704 09:08:49.759492       6 log.go:172] (0xc002c404d0) Data frame received for 3
I0704 09:08:49.759498       6 log.go:172] (0xc0017560a0) (3) Data frame handling
I0704 09:08:49.759610       6 log.go:172] (0xc002c404d0) Data frame received for 5
I0704 09:08:49.759633       6 log.go:172] (0xc002801c20) (5) Data frame handling
I0704 09:08:49.761375       6 log.go:172] (0xc002c404d0) Data frame received for 1
I0704 09:08:49.761392       6 log.go:172] (0xc002801b80) (1) Data frame handling
I0704 09:08:49.761402       6 log.go:172] (0xc002801b80) (1) Data frame sent
I0704 09:08:49.761417       6 log.go:172] (0xc002c404d0) (0xc002801b80) Stream removed, broadcasting: 1
I0704 09:08:49.761456       6 log.go:172] (0xc002c404d0) Go away received
I0704 09:08:49.761483       6 log.go:172] (0xc002c404d0) (0xc002801b80) Stream removed, broadcasting: 1
I0704 09:08:49.761544       6 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc0017560a0), 0x5:(*spdystream.Stream)(0xc002801c20)}
I0704 09:08:49.761575       6 log.go:172] (0xc002c404d0) (0xc0017560a0) Stream removed, broadcasting: 3
I0704 09:08:49.761592       6 log.go:172] (0xc002c404d0) (0xc002801c20) Stream removed, broadcasting: 5
Jul  4 09:08:49.761: INFO: Found all expected endpoints: [netserver-0]
Jul  4 09:08:49.764: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.88:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7270 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:08:49.764: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:08:49.795679       6 log.go:172] (0xc0044904d0) (0xc00199c780) Create stream
I0704 09:08:49.795714       6 log.go:172] (0xc0044904d0) (0xc00199c780) Stream added, broadcasting: 1
I0704 09:08:49.797718       6 log.go:172] (0xc0044904d0) Reply frame received for 1
I0704 09:08:49.797743       6 log.go:172] (0xc0044904d0) (0xc0011ee1e0) Create stream
I0704 09:08:49.797752       6 log.go:172] (0xc0044904d0) (0xc0011ee1e0) Stream added, broadcasting: 3
I0704 09:08:49.798537       6 log.go:172] (0xc0044904d0) Reply frame received for 3
I0704 09:08:49.798568       6 log.go:172] (0xc0044904d0) (0xc0017561e0) Create stream
I0704 09:08:49.798576       6 log.go:172] (0xc0044904d0) (0xc0017561e0) Stream added, broadcasting: 5
I0704 09:08:49.799312       6 log.go:172] (0xc0044904d0) Reply frame received for 5
I0704 09:08:49.857937       6 log.go:172] (0xc0044904d0) Data frame received for 3
I0704 09:08:49.857987       6 log.go:172] (0xc0011ee1e0) (3) Data frame handling
I0704 09:08:49.858008       6 log.go:172] (0xc0011ee1e0) (3) Data frame sent
I0704 09:08:49.858025       6 log.go:172] (0xc0044904d0) Data frame received for 3
I0704 09:08:49.858172       6 log.go:172] (0xc0011ee1e0) (3) Data frame handling
I0704 09:08:49.858351       6 log.go:172] (0xc0044904d0) Data frame received for 5
I0704 09:08:49.858395       6 log.go:172] (0xc0017561e0) (5) Data frame handling
I0704 09:08:49.859491       6 log.go:172] (0xc0044904d0) Data frame received for 1
I0704 09:08:49.859511       6 log.go:172] (0xc00199c780) (1) Data frame handling
I0704 09:08:49.859521       6 log.go:172] (0xc00199c780) (1) Data frame sent
I0704 09:08:49.859530       6 log.go:172] (0xc0044904d0) (0xc00199c780) Stream removed, broadcasting: 1
I0704 09:08:49.859542       6 log.go:172] (0xc0044904d0) Go away received
I0704 09:08:49.859698       6 log.go:172] (0xc0044904d0) (0xc00199c780) Stream removed, broadcasting: 1
I0704 09:08:49.859721       6 log.go:172] (0xc0044904d0) (0xc0011ee1e0) Stream removed, broadcasting: 3
I0704 09:08:49.859743       6 log.go:172] (0xc0044904d0) (0xc0017561e0) Stream removed, broadcasting: 5
Jul  4 09:08:49.859: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:08:49.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7270" for this suite.

• [SLOW TEST:24.950 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2180,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:08:49.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  4 09:08:49.918: INFO: Waiting up to 5m0s for pod "pod-982cfa51-da11-4e2f-a8e9-c297aaadf8db" in namespace "emptydir-9623" to be "success or failure"
Jul  4 09:08:49.951: INFO: Pod "pod-982cfa51-da11-4e2f-a8e9-c297aaadf8db": Phase="Pending", Reason="", readiness=false. Elapsed: 33.648999ms
Jul  4 09:08:51.955: INFO: Pod "pod-982cfa51-da11-4e2f-a8e9-c297aaadf8db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036999189s
Jul  4 09:08:53.958: INFO: Pod "pod-982cfa51-da11-4e2f-a8e9-c297aaadf8db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040405688s
STEP: Saw pod success
Jul  4 09:08:53.958: INFO: Pod "pod-982cfa51-da11-4e2f-a8e9-c297aaadf8db" satisfied condition "success or failure"
Jul  4 09:08:53.960: INFO: Trying to get logs from node jerma-worker pod pod-982cfa51-da11-4e2f-a8e9-c297aaadf8db container test-container: 
STEP: delete the pod
Jul  4 09:08:53.987: INFO: Waiting for pod pod-982cfa51-da11-4e2f-a8e9-c297aaadf8db to disappear
Jul  4 09:08:54.000: INFO: Pod pod-982cfa51-da11-4e2f-a8e9-c297aaadf8db no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:08:54.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9623" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2261,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:08:54.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:08:54.089: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jul  4 09:08:59.095: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  4 09:08:59.095: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul  4 09:08:59.119: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-6955 /apis/apps/v1/namespaces/deployment-6955/deployments/test-cleanup-deployment 9bc88d11-70bc-4506-b33e-4e1555bc0477 19275 1 2020-07-04 09:08:59 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0042a8408  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Jul  4 09:08:59.137: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jul  4 09:08:59.137: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jul  4 09:08:59.137: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-6955 /apis/apps/v1/namespaces/deployment-6955/replicasets/test-cleanup-controller 79269da9-3ffb-42a6-b804-8ca546789a28 19276 1 2020-07-04 09:08:54 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 9bc88d11-70bc-4506-b33e-4e1555bc0477 0xc0042a8737 0xc0042a8738}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0042a8798  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul  4 09:08:59.140: INFO: Pod "test-cleanup-controller-7zgp9" is available:
&Pod{ObjectMeta:{test-cleanup-controller-7zgp9 test-cleanup-controller- deployment-6955 /api/v1/namespaces/deployment-6955/pods/test-cleanup-controller-7zgp9 afe91bff-dafe-4a79-aee8-8b334ee8b7a7 19266 0 2020-07-04 09:08:54 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 79269da9-3ffb-42a6-b804-8ca546789a28 0xc00414c737 0xc00414c738}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnjnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnjnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnjnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:08:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:08:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:08:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:08:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.79,StartTime:2020-07-04 09:08:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:08:56 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5699c35e0033693a262ee42c7b46c6adcddc308184b32a6d51dd14a629320b20,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:08:59.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6955" for this suite.

• [SLOW TEST:5.209 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":130,"skipped":2265,"failed":0}
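Editor's note: the Deployment dump above shows RevisionHistoryLimit:*0, which is the setting that drives this test's behavior — with a history limit of 0, the controller deletes all old ReplicaSets once a rollout completes. A minimal manifest sketch of such a Deployment (assumed field layout, not the test's exact object):

```yaml
# Minimal sketch: revisionHistoryLimit: 0 tells the Deployment controller
# to keep no old ReplicaSets after a successful rollout, so they are
# garbage-collected as the test expects.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0   # retain zero old ReplicaSets
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
```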
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:08:59.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:08:59.359: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-1c41f40e-2234-4da3-bb83-9a2daeb373a2" in namespace "security-context-test-5130" to be "success or failure"
Jul  4 09:08:59.427: INFO: Pod "alpine-nnp-false-1c41f40e-2234-4da3-bb83-9a2daeb373a2": Phase="Pending", Reason="", readiness=false. Elapsed: 67.738871ms
Jul  4 09:09:01.431: INFO: Pod "alpine-nnp-false-1c41f40e-2234-4da3-bb83-9a2daeb373a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071360183s
Jul  4 09:09:03.434: INFO: Pod "alpine-nnp-false-1c41f40e-2234-4da3-bb83-9a2daeb373a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075092918s
Jul  4 09:09:05.510: INFO: Pod "alpine-nnp-false-1c41f40e-2234-4da3-bb83-9a2daeb373a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150484287s
Jul  4 09:09:07.514: INFO: Pod "alpine-nnp-false-1c41f40e-2234-4da3-bb83-9a2daeb373a2": Phase="Running", Reason="", readiness=true. Elapsed: 8.15496326s
Jul  4 09:09:09.519: INFO: Pod "alpine-nnp-false-1c41f40e-2234-4da3-bb83-9a2daeb373a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.1594868s
Jul  4 09:09:09.519: INFO: Pod "alpine-nnp-false-1c41f40e-2234-4da3-bb83-9a2daeb373a2" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:09:09.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5130" for this suite.

• [SLOW TEST:10.330 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2311,"failed":0}
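Editor's note: the test above creates a pod whose container sets allowPrivilegeEscalation: false and then verifies that a setuid binary inside the container cannot gain privileges. A minimal pod sketch with that setting (assumed layout, not the test's exact pod spec):

```yaml
# Minimal sketch: allowPrivilegeEscalation: false sets the kernel's
# no_new_privs flag on the container process, so setuid binaries cannot
# raise its effective privileges.
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: alpine
    securityContext:
      allowPrivilegeEscalation: false
```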
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:09:09.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-aab53c83-5661-4eed-aeb1-5cadc1f0d446
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:09:09.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8304" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":132,"skipped":2339,"failed":0}
SSS
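Editor's note: the Secrets test above expects the API server to reject a Secret whose data map contains an empty key. A minimal sketch of a manifest that would fail validation on create (hypothetical name and value, not the test's exact object):

```yaml
# Minimal sketch: an empty string is not a valid key in a Secret's data
# map, so the API server rejects this object at validation time.
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test
data:
  "": dmFsdWU=   # empty key -> validation error on create
```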
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:09:09.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-258
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-258
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-258
Jul  4 09:09:09.796: INFO: Found 0 stateful pods, waiting for 1
Jul  4 09:09:19.953: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul  4 09:09:19.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-258 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  4 09:09:20.304: INFO: stderr: "I0704 09:09:20.139635    2687 log.go:172] (0xc0000f4bb0) (0xc0006fdae0) Create stream\nI0704 09:09:20.139693    2687 log.go:172] (0xc0000f4bb0) (0xc0006fdae0) Stream added, broadcasting: 1\nI0704 09:09:20.142431    2687 log.go:172] (0xc0000f4bb0) Reply frame received for 1\nI0704 09:09:20.142474    2687 log.go:172] (0xc0000f4bb0) (0xc0006fdcc0) Create stream\nI0704 09:09:20.142490    2687 log.go:172] (0xc0000f4bb0) (0xc0006fdcc0) Stream added, broadcasting: 3\nI0704 09:09:20.143706    2687 log.go:172] (0xc0000f4bb0) Reply frame received for 3\nI0704 09:09:20.143742    2687 log.go:172] (0xc0000f4bb0) (0xc00092e000) Create stream\nI0704 09:09:20.143750    2687 log.go:172] (0xc0000f4bb0) (0xc00092e000) Stream added, broadcasting: 5\nI0704 09:09:20.144784    2687 log.go:172] (0xc0000f4bb0) Reply frame received for 5\nI0704 09:09:20.228849    2687 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0704 09:09:20.228880    2687 log.go:172] (0xc00092e000) (5) Data frame handling\nI0704 09:09:20.228902    2687 log.go:172] (0xc00092e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0704 09:09:20.298529    2687 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0704 09:09:20.298576    2687 log.go:172] (0xc0006fdcc0) (3) Data frame handling\nI0704 09:09:20.298623    2687 log.go:172] (0xc0006fdcc0) (3) Data frame sent\nI0704 09:09:20.298914    2687 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0704 09:09:20.298970    2687 log.go:172] (0xc0006fdcc0) (3) Data frame handling\nI0704 09:09:20.299168    2687 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0704 09:09:20.299180    2687 log.go:172] (0xc00092e000) (5) Data frame handling\nI0704 09:09:20.300847    2687 log.go:172] (0xc0000f4bb0) Data frame received for 1\nI0704 09:09:20.300860    2687 log.go:172] (0xc0006fdae0) (1) Data frame handling\nI0704 09:09:20.300871    2687 log.go:172] (0xc0006fdae0) (1) Data frame sent\nI0704 09:09:20.301222  
  2687 log.go:172] (0xc0000f4bb0) (0xc0006fdae0) Stream removed, broadcasting: 1\nI0704 09:09:20.301483    2687 log.go:172] (0xc0000f4bb0) (0xc0006fdae0) Stream removed, broadcasting: 1\nI0704 09:09:20.301497    2687 log.go:172] (0xc0000f4bb0) (0xc0006fdcc0) Stream removed, broadcasting: 3\nI0704 09:09:20.301621    2687 log.go:172] (0xc0000f4bb0) Go away received\nI0704 09:09:20.301685    2687 log.go:172] (0xc0000f4bb0) (0xc00092e000) Stream removed, broadcasting: 5\n"
Jul  4 09:09:20.304: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  4 09:09:20.304: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  4 09:09:20.308: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul  4 09:09:30.312: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  4 09:09:30.312: INFO: Waiting for statefulset status.replicas updated to 0
Jul  4 09:09:30.438: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999782s
Jul  4 09:09:31.443: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.881965593s
Jul  4 09:09:32.447: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.877588953s
Jul  4 09:09:33.452: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.872630456s
Jul  4 09:09:34.487: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.86776426s
Jul  4 09:09:35.492: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.833233439s
Jul  4 09:09:36.496: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.828175759s
Jul  4 09:09:37.500: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.824087255s
Jul  4 09:09:38.504: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.819704084s
Jul  4 09:09:39.508: INFO: Verifying statefulset ss doesn't scale past 1 for another 816.037104ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-258
Jul  4 09:09:40.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-258 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:09:40.742: INFO: stderr: "I0704 09:09:40.641979    2707 log.go:172] (0xc000b42a50) (0xc0007d0000) Create stream\nI0704 09:09:40.642068    2707 log.go:172] (0xc000b42a50) (0xc0007d0000) Stream added, broadcasting: 1\nI0704 09:09:40.644654    2707 log.go:172] (0xc000b42a50) Reply frame received for 1\nI0704 09:09:40.644705    2707 log.go:172] (0xc000b42a50) (0xc00067fb80) Create stream\nI0704 09:09:40.644720    2707 log.go:172] (0xc000b42a50) (0xc00067fb80) Stream added, broadcasting: 3\nI0704 09:09:40.646064    2707 log.go:172] (0xc000b42a50) Reply frame received for 3\nI0704 09:09:40.646115    2707 log.go:172] (0xc000b42a50) (0xc0007d00a0) Create stream\nI0704 09:09:40.646136    2707 log.go:172] (0xc000b42a50) (0xc0007d00a0) Stream added, broadcasting: 5\nI0704 09:09:40.647053    2707 log.go:172] (0xc000b42a50) Reply frame received for 5\nI0704 09:09:40.738191    2707 log.go:172] (0xc000b42a50) Data frame received for 5\nI0704 09:09:40.738227    2707 log.go:172] (0xc0007d00a0) (5) Data frame handling\nI0704 09:09:40.738242    2707 log.go:172] (0xc0007d00a0) (5) Data frame sent\nI0704 09:09:40.738251    2707 log.go:172] (0xc000b42a50) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0704 09:09:40.738267    2707 log.go:172] (0xc000b42a50) Data frame received for 3\nI0704 09:09:40.738292    2707 log.go:172] (0xc00067fb80) (3) Data frame handling\nI0704 09:09:40.738314    2707 log.go:172] (0xc00067fb80) (3) Data frame sent\nI0704 09:09:40.738327    2707 log.go:172] (0xc000b42a50) Data frame received for 3\nI0704 09:09:40.738334    2707 log.go:172] (0xc00067fb80) (3) Data frame handling\nI0704 09:09:40.738363    2707 log.go:172] (0xc0007d00a0) (5) Data frame handling\nI0704 09:09:40.739765    2707 log.go:172] (0xc000b42a50) Data frame received for 1\nI0704 09:09:40.739777    2707 log.go:172] (0xc0007d0000) (1) Data frame handling\nI0704 09:09:40.739784    2707 log.go:172] (0xc0007d0000) (1) Data frame sent\nI0704 09:09:40.739793  
  2707 log.go:172] (0xc000b42a50) (0xc0007d0000) Stream removed, broadcasting: 1\nI0704 09:09:40.739802    2707 log.go:172] (0xc000b42a50) Go away received\nI0704 09:09:40.740142    2707 log.go:172] (0xc000b42a50) (0xc0007d0000) Stream removed, broadcasting: 1\nI0704 09:09:40.740159    2707 log.go:172] (0xc000b42a50) (0xc00067fb80) Stream removed, broadcasting: 3\nI0704 09:09:40.740168    2707 log.go:172] (0xc000b42a50) (0xc0007d00a0) Stream removed, broadcasting: 5\n"
Jul  4 09:09:40.743: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  4 09:09:40.743: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  4 09:09:40.749: INFO: Found 1 stateful pods, waiting for 3
Jul  4 09:09:50.754: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:09:50.754: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:09:50.754: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul  4 09:09:50.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-258 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  4 09:09:50.947: INFO: stderr: "I0704 09:09:50.873330    2729 log.go:172] (0xc000105b80) (0xc000b040a0) Create stream\nI0704 09:09:50.873394    2729 log.go:172] (0xc000105b80) (0xc000b040a0) Stream added, broadcasting: 1\nI0704 09:09:50.876749    2729 log.go:172] (0xc000105b80) Reply frame received for 1\nI0704 09:09:50.876792    2729 log.go:172] (0xc000105b80) (0xc000b04140) Create stream\nI0704 09:09:50.876807    2729 log.go:172] (0xc000105b80) (0xc000b04140) Stream added, broadcasting: 3\nI0704 09:09:50.877857    2729 log.go:172] (0xc000105b80) Reply frame received for 3\nI0704 09:09:50.877891    2729 log.go:172] (0xc000105b80) (0xc000a380a0) Create stream\nI0704 09:09:50.877902    2729 log.go:172] (0xc000105b80) (0xc000a380a0) Stream added, broadcasting: 5\nI0704 09:09:50.878545    2729 log.go:172] (0xc000105b80) Reply frame received for 5\nI0704 09:09:50.942603    2729 log.go:172] (0xc000105b80) Data frame received for 5\nI0704 09:09:50.942654    2729 log.go:172] (0xc000a380a0) (5) Data frame handling\nI0704 09:09:50.942672    2729 log.go:172] (0xc000a380a0) (5) Data frame sent\nI0704 09:09:50.942680    2729 log.go:172] (0xc000105b80) Data frame received for 5\nI0704 09:09:50.942688    2729 log.go:172] (0xc000a380a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0704 09:09:50.942717    2729 log.go:172] (0xc000105b80) Data frame received for 3\nI0704 09:09:50.942733    2729 log.go:172] (0xc000b04140) (3) Data frame handling\nI0704 09:09:50.942754    2729 log.go:172] (0xc000b04140) (3) Data frame sent\nI0704 09:09:50.942766    2729 log.go:172] (0xc000105b80) Data frame received for 3\nI0704 09:09:50.942781    2729 log.go:172] (0xc000b04140) (3) Data frame handling\nI0704 09:09:50.943965    2729 log.go:172] (0xc000105b80) Data frame received for 1\nI0704 09:09:50.943988    2729 log.go:172] (0xc000b040a0) (1) Data frame handling\nI0704 09:09:50.944001    2729 log.go:172] (0xc000b040a0) (1) Data frame sent\nI0704 09:09:50.944036  
  2729 log.go:172] (0xc000105b80) (0xc000b040a0) Stream removed, broadcasting: 1\nI0704 09:09:50.944047    2729 log.go:172] (0xc000105b80) Go away received\nI0704 09:09:50.944341    2729 log.go:172] (0xc000105b80) (0xc000b040a0) Stream removed, broadcasting: 1\nI0704 09:09:50.944356    2729 log.go:172] (0xc000105b80) (0xc000b04140) Stream removed, broadcasting: 3\nI0704 09:09:50.944362    2729 log.go:172] (0xc000105b80) (0xc000a380a0) Stream removed, broadcasting: 5\n"
Jul  4 09:09:50.947: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  4 09:09:50.947: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  4 09:09:50.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-258 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  4 09:09:51.180: INFO: stderr: "I0704 09:09:51.076029    2749 log.go:172] (0xc000115290) (0xc0009bc1e0) Create stream\nI0704 09:09:51.076084    2749 log.go:172] (0xc000115290) (0xc0009bc1e0) Stream added, broadcasting: 1\nI0704 09:09:51.078435    2749 log.go:172] (0xc000115290) Reply frame received for 1\nI0704 09:09:51.078464    2749 log.go:172] (0xc000115290) (0xc0005ca6e0) Create stream\nI0704 09:09:51.078472    2749 log.go:172] (0xc000115290) (0xc0005ca6e0) Stream added, broadcasting: 3\nI0704 09:09:51.079243    2749 log.go:172] (0xc000115290) Reply frame received for 3\nI0704 09:09:51.079287    2749 log.go:172] (0xc000115290) (0xc0009bc280) Create stream\nI0704 09:09:51.079298    2749 log.go:172] (0xc000115290) (0xc0009bc280) Stream added, broadcasting: 5\nI0704 09:09:51.080045    2749 log.go:172] (0xc000115290) Reply frame received for 5\nI0704 09:09:51.146345    2749 log.go:172] (0xc000115290) Data frame received for 5\nI0704 09:09:51.146375    2749 log.go:172] (0xc0009bc280) (5) Data frame handling\nI0704 09:09:51.146407    2749 log.go:172] (0xc0009bc280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0704 09:09:51.171789    2749 log.go:172] (0xc000115290) Data frame received for 3\nI0704 09:09:51.171835    2749 log.go:172] (0xc0005ca6e0) (3) Data frame handling\nI0704 09:09:51.171898    2749 log.go:172] (0xc0005ca6e0) (3) Data frame sent\nI0704 09:09:51.172193    2749 log.go:172] (0xc000115290) Data frame received for 5\nI0704 09:09:51.172213    2749 log.go:172] (0xc0009bc280) (5) Data frame handling\nI0704 09:09:51.172260    2749 log.go:172] (0xc000115290) Data frame received for 3\nI0704 09:09:51.172277    2749 log.go:172] (0xc0005ca6e0) (3) Data frame handling\nI0704 09:09:51.174577    2749 log.go:172] (0xc000115290) Data frame received for 1\nI0704 09:09:51.174606    2749 log.go:172] (0xc0009bc1e0) (1) Data frame handling\nI0704 09:09:51.174629    2749 log.go:172] (0xc0009bc1e0) (1) Data frame sent\nI0704 09:09:51.174654  
  2749 log.go:172] (0xc000115290) (0xc0009bc1e0) Stream removed, broadcasting: 1\nI0704 09:09:51.174674    2749 log.go:172] (0xc000115290) Go away received\nI0704 09:09:51.175161    2749 log.go:172] (0xc000115290) (0xc0009bc1e0) Stream removed, broadcasting: 1\nI0704 09:09:51.175184    2749 log.go:172] (0xc000115290) (0xc0005ca6e0) Stream removed, broadcasting: 3\nI0704 09:09:51.175196    2749 log.go:172] (0xc000115290) (0xc0009bc280) Stream removed, broadcasting: 5\n"
Jul  4 09:09:51.180: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  4 09:09:51.180: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  4 09:09:51.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-258 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  4 09:09:51.451: INFO: stderr: "I0704 09:09:51.343032    2769 log.go:172] (0xc000442dc0) (0xc0006e39a0) Create stream\nI0704 09:09:51.343096    2769 log.go:172] (0xc000442dc0) (0xc0006e39a0) Stream added, broadcasting: 1\nI0704 09:09:51.345602    2769 log.go:172] (0xc000442dc0) Reply frame received for 1\nI0704 09:09:51.345633    2769 log.go:172] (0xc000442dc0) (0xc000960000) Create stream\nI0704 09:09:51.345642    2769 log.go:172] (0xc000442dc0) (0xc000960000) Stream added, broadcasting: 3\nI0704 09:09:51.346612    2769 log.go:172] (0xc000442dc0) Reply frame received for 3\nI0704 09:09:51.346682    2769 log.go:172] (0xc000442dc0) (0xc000270000) Create stream\nI0704 09:09:51.346708    2769 log.go:172] (0xc000442dc0) (0xc000270000) Stream added, broadcasting: 5\nI0704 09:09:51.347599    2769 log.go:172] (0xc000442dc0) Reply frame received for 5\nI0704 09:09:51.400887    2769 log.go:172] (0xc000442dc0) Data frame received for 5\nI0704 09:09:51.400914    2769 log.go:172] (0xc000270000) (5) Data frame handling\nI0704 09:09:51.400933    2769 log.go:172] (0xc000270000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0704 09:09:51.444051    2769 log.go:172] (0xc000442dc0) Data frame received for 3\nI0704 09:09:51.444096    2769 log.go:172] (0xc000960000) (3) Data frame handling\nI0704 09:09:51.444167    2769 log.go:172] (0xc000960000) (3) Data frame sent\nI0704 09:09:51.444191    2769 log.go:172] (0xc000442dc0) Data frame received for 3\nI0704 09:09:51.444207    2769 log.go:172] (0xc000960000) (3) Data frame handling\nI0704 09:09:51.444440    2769 log.go:172] (0xc000442dc0) Data frame received for 5\nI0704 09:09:51.444472    2769 log.go:172] (0xc000270000) (5) Data frame handling\nI0704 09:09:51.446682    2769 log.go:172] (0xc000442dc0) Data frame received for 1\nI0704 09:09:51.446717    2769 log.go:172] (0xc0006e39a0) (1) Data frame handling\nI0704 09:09:51.446741    2769 log.go:172] (0xc0006e39a0) (1) Data frame sent\nI0704 09:09:51.446766  
  2769 log.go:172] (0xc000442dc0) (0xc0006e39a0) Stream removed, broadcasting: 1\nI0704 09:09:51.446799    2769 log.go:172] (0xc000442dc0) Go away received\nI0704 09:09:51.447268    2769 log.go:172] (0xc000442dc0) (0xc0006e39a0) Stream removed, broadcasting: 1\nI0704 09:09:51.447295    2769 log.go:172] (0xc000442dc0) (0xc000960000) Stream removed, broadcasting: 3\nI0704 09:09:51.447307    2769 log.go:172] (0xc000442dc0) (0xc000270000) Stream removed, broadcasting: 5\n"
Jul  4 09:09:51.452: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  4 09:09:51.452: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  4 09:09:51.452: INFO: Waiting for statefulset status.replicas updated to 0
Jul  4 09:09:51.462: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jul  4 09:10:01.483: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  4 09:10:01.483: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul  4 09:10:01.483: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul  4 09:10:01.494: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999699s
Jul  4 09:10:02.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995676227s
Jul  4 09:10:03.535: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.99122186s
Jul  4 09:10:04.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.955096903s
Jul  4 09:10:05.564: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.931496607s
Jul  4 09:10:06.606: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.926114674s
Jul  4 09:10:07.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.883343192s
Jul  4 09:10:08.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.87928663s
Jul  4 09:10:09.621: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.874197084s
Jul  4 09:10:10.709: INFO: Verifying statefulset ss doesn't scale past 3 for another 868.995141ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-258
Jul  4 09:10:11.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-258 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:10:11.961: INFO: stderr: "I0704 09:10:11.892419    2790 log.go:172] (0xc0004dadc0) (0xc000bbc000) Create stream\nI0704 09:10:11.892483    2790 log.go:172] (0xc0004dadc0) (0xc000bbc000) Stream added, broadcasting: 1\nI0704 09:10:11.894940    2790 log.go:172] (0xc0004dadc0) Reply frame received for 1\nI0704 09:10:11.894978    2790 log.go:172] (0xc0004dadc0) (0xc000a2e000) Create stream\nI0704 09:10:11.894990    2790 log.go:172] (0xc0004dadc0) (0xc000a2e000) Stream added, broadcasting: 3\nI0704 09:10:11.895874    2790 log.go:172] (0xc0004dadc0) Reply frame received for 3\nI0704 09:10:11.895913    2790 log.go:172] (0xc0004dadc0) (0xc00065da40) Create stream\nI0704 09:10:11.895924    2790 log.go:172] (0xc0004dadc0) (0xc00065da40) Stream added, broadcasting: 5\nI0704 09:10:11.896778    2790 log.go:172] (0xc0004dadc0) Reply frame received for 5\nI0704 09:10:11.955432    2790 log.go:172] (0xc0004dadc0) Data frame received for 5\nI0704 09:10:11.955468    2790 log.go:172] (0xc00065da40) (5) Data frame handling\nI0704 09:10:11.955486    2790 log.go:172] (0xc00065da40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0704 09:10:11.955551    2790 log.go:172] (0xc0004dadc0) Data frame received for 5\nI0704 09:10:11.955569    2790 log.go:172] (0xc00065da40) (5) Data frame handling\nI0704 09:10:11.955593    2790 log.go:172] (0xc0004dadc0) Data frame received for 3\nI0704 09:10:11.955609    2790 log.go:172] (0xc000a2e000) (3) Data frame handling\nI0704 09:10:11.955625    2790 log.go:172] (0xc000a2e000) (3) Data frame sent\nI0704 09:10:11.955647    2790 log.go:172] (0xc0004dadc0) Data frame received for 3\nI0704 09:10:11.955657    2790 log.go:172] (0xc000a2e000) (3) Data frame handling\nI0704 09:10:11.957287    2790 log.go:172] (0xc0004dadc0) Data frame received for 1\nI0704 09:10:11.957368    2790 log.go:172] (0xc000bbc000) (1) Data frame handling\nI0704 09:10:11.957393    2790 log.go:172] (0xc000bbc000) (1) Data frame sent\nI0704 09:10:11.957414    2790 log.go:172] (0xc0004dadc0) (0xc000bbc000) Stream removed, broadcasting: 1\nI0704 09:10:11.957433    2790 log.go:172] (0xc0004dadc0) Go away received\nI0704 09:10:11.957771    2790 log.go:172] (0xc0004dadc0) (0xc000bbc000) Stream removed, broadcasting: 1\nI0704 09:10:11.957787    2790 log.go:172] (0xc0004dadc0) (0xc000a2e000) Stream removed, broadcasting: 3\nI0704 09:10:11.957796    2790 log.go:172] (0xc0004dadc0) (0xc00065da40) Stream removed, broadcasting: 5\n"
Jul  4 09:10:11.961: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  4 09:10:11.961: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  4 09:10:11.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-258 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:10:12.155: INFO: stderr: "I0704 09:10:12.088277    2813 log.go:172] (0xc0009bf130) (0xc000a1c3c0) Create stream\nI0704 09:10:12.088333    2813 log.go:172] (0xc0009bf130) (0xc000a1c3c0) Stream added, broadcasting: 1\nI0704 09:10:12.091103    2813 log.go:172] (0xc0009bf130) Reply frame received for 1\nI0704 09:10:12.091149    2813 log.go:172] (0xc0009bf130) (0xc000a1c460) Create stream\nI0704 09:10:12.091162    2813 log.go:172] (0xc0009bf130) (0xc000a1c460) Stream added, broadcasting: 3\nI0704 09:10:12.092828    2813 log.go:172] (0xc0009bf130) Reply frame received for 3\nI0704 09:10:12.092880    2813 log.go:172] (0xc0009bf130) (0xc000847cc0) Create stream\nI0704 09:10:12.092904    2813 log.go:172] (0xc0009bf130) (0xc000847cc0) Stream added, broadcasting: 5\nI0704 09:10:12.094059    2813 log.go:172] (0xc0009bf130) Reply frame received for 5\nI0704 09:10:12.148796    2813 log.go:172] (0xc0009bf130) Data frame received for 5\nI0704 09:10:12.148835    2813 log.go:172] (0xc000847cc0) (5) Data frame handling\nI0704 09:10:12.148853    2813 log.go:172] (0xc000847cc0) (5) Data frame sent\nI0704 09:10:12.148870    2813 log.go:172] (0xc0009bf130) Data frame received for 5\nI0704 09:10:12.148880    2813 log.go:172] (0xc000847cc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0704 09:10:12.148925    2813 log.go:172] (0xc0009bf130) Data frame received for 3\nI0704 09:10:12.148938    2813 log.go:172] (0xc000a1c460) (3) Data frame handling\nI0704 09:10:12.148949    2813 log.go:172] (0xc000a1c460) (3) Data frame sent\nI0704 09:10:12.148961    2813 log.go:172] (0xc0009bf130) Data frame received for 3\nI0704 09:10:12.148980    2813 log.go:172] (0xc000a1c460) (3) Data frame handling\nI0704 09:10:12.150854    2813 log.go:172] (0xc0009bf130) Data frame received for 1\nI0704 09:10:12.150891    2813 log.go:172] (0xc000a1c3c0) (1) Data frame handling\nI0704 09:10:12.150914    2813 log.go:172] (0xc000a1c3c0) (1) Data frame sent\nI0704 09:10:12.150933    2813 log.go:172] (0xc0009bf130) (0xc000a1c3c0) Stream removed, broadcasting: 1\nI0704 09:10:12.150969    2813 log.go:172] (0xc0009bf130) Go away received\nI0704 09:10:12.151320    2813 log.go:172] (0xc0009bf130) (0xc000a1c3c0) Stream removed, broadcasting: 1\nI0704 09:10:12.151341    2813 log.go:172] (0xc0009bf130) (0xc000a1c460) Stream removed, broadcasting: 3\nI0704 09:10:12.151351    2813 log.go:172] (0xc0009bf130) (0xc000847cc0) Stream removed, broadcasting: 5\n"
Jul  4 09:10:12.155: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  4 09:10:12.155: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  4 09:10:12.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-258 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:10:12.373: INFO: stderr: "I0704 09:10:12.289475    2835 log.go:172] (0xc0000f56b0) (0xc000629a40) Create stream\nI0704 09:10:12.289550    2835 log.go:172] (0xc0000f56b0) (0xc000629a40) Stream added, broadcasting: 1\nI0704 09:10:12.292367    2835 log.go:172] (0xc0000f56b0) Reply frame received for 1\nI0704 09:10:12.292433    2835 log.go:172] (0xc0000f56b0) (0xc000ac0000) Create stream\nI0704 09:10:12.292483    2835 log.go:172] (0xc0000f56b0) (0xc000ac0000) Stream added, broadcasting: 3\nI0704 09:10:12.293815    2835 log.go:172] (0xc0000f56b0) Reply frame received for 3\nI0704 09:10:12.293857    2835 log.go:172] (0xc0000f56b0) (0xc000a42000) Create stream\nI0704 09:10:12.293873    2835 log.go:172] (0xc0000f56b0) (0xc000a42000) Stream added, broadcasting: 5\nI0704 09:10:12.294811    2835 log.go:172] (0xc0000f56b0) Reply frame received for 5\nI0704 09:10:12.366372    2835 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0704 09:10:12.366422    2835 log.go:172] (0xc0000f56b0) Data frame received for 3\nI0704 09:10:12.366457    2835 log.go:172] (0xc000ac0000) (3) Data frame handling\nI0704 09:10:12.366477    2835 log.go:172] (0xc000ac0000) (3) Data frame sent\nI0704 09:10:12.366498    2835 log.go:172] (0xc0000f56b0) Data frame received for 3\nI0704 09:10:12.366515    2835 log.go:172] (0xc000ac0000) (3) Data frame handling\nI0704 09:10:12.366532    2835 log.go:172] (0xc000a42000) (5) Data frame handling\nI0704 09:10:12.366546    2835 log.go:172] (0xc000a42000) (5) Data frame sent\nI0704 09:10:12.366563    2835 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0704 09:10:12.366581    2835 log.go:172] (0xc000a42000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0704 09:10:12.367825    2835 log.go:172] (0xc0000f56b0) Data frame received for 1\nI0704 09:10:12.367864    2835 log.go:172] (0xc000629a40) (1) Data frame handling\nI0704 09:10:12.367893    2835 log.go:172] (0xc000629a40) (1) Data frame sent\nI0704 09:10:12.367930    2835 log.go:172] (0xc0000f56b0) (0xc000629a40) Stream removed, broadcasting: 1\nI0704 09:10:12.367964    2835 log.go:172] (0xc0000f56b0) Go away received\nI0704 09:10:12.368355    2835 log.go:172] (0xc0000f56b0) (0xc000629a40) Stream removed, broadcasting: 1\nI0704 09:10:12.368379    2835 log.go:172] (0xc0000f56b0) (0xc000ac0000) Stream removed, broadcasting: 3\nI0704 09:10:12.368393    2835 log.go:172] (0xc0000f56b0) (0xc000a42000) Stream removed, broadcasting: 5\n"
Jul  4 09:10:12.373: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  4 09:10:12.373: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  4 09:10:12.373: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul  4 09:10:42.415: INFO: Deleting all statefulset in ns statefulset-258
Jul  4 09:10:42.419: INFO: Scaling statefulset ss to 0
Jul  4 09:10:42.428: INFO: Waiting for statefulset status.replicas updated to 0
Jul  4 09:10:42.430: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:10:42.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-258" for this suite.

• [SLOW TEST:92.835 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":133,"skipped":2342,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:10:42.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:10:42.530: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:10:43.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3482" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":134,"skipped":2370,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:10:43.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-02fa8042-de5c-42e7-8078-23e420754a20
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:10:43.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9845" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":135,"skipped":2379,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:10:43.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-693
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-693
STEP: creating replication controller externalsvc in namespace services-693
I0704 09:10:44.091591       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-693, replica count: 2
I0704 09:10:47.142077       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:10:50.142339       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jul  4 09:10:50.820: INFO: Creating new exec pod
Jul  4 09:10:54.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-693 execpodqv7hl -- /bin/sh -x -c nslookup nodeport-service'
Jul  4 09:10:55.127: INFO: stderr: "I0704 09:10:55.004698    2858 log.go:172] (0xc0001194a0) (0xc0009dc0a0) Create stream\nI0704 09:10:55.004754    2858 log.go:172] (0xc0001194a0) (0xc0009dc0a0) Stream added, broadcasting: 1\nI0704 09:10:55.007369    2858 log.go:172] (0xc0001194a0) Reply frame received for 1\nI0704 09:10:55.007426    2858 log.go:172] (0xc0001194a0) (0xc00068bb80) Create stream\nI0704 09:10:55.007448    2858 log.go:172] (0xc0001194a0) (0xc00068bb80) Stream added, broadcasting: 3\nI0704 09:10:55.008458    2858 log.go:172] (0xc0001194a0) Reply frame received for 3\nI0704 09:10:55.008498    2858 log.go:172] (0xc0001194a0) (0xc00026a000) Create stream\nI0704 09:10:55.008526    2858 log.go:172] (0xc0001194a0) (0xc00026a000) Stream added, broadcasting: 5\nI0704 09:10:55.009624    2858 log.go:172] (0xc0001194a0) Reply frame received for 5\nI0704 09:10:55.108802    2858 log.go:172] (0xc0001194a0) Data frame received for 5\nI0704 09:10:55.108831    2858 log.go:172] (0xc00026a000) (5) Data frame handling\nI0704 09:10:55.108849    2858 log.go:172] (0xc00026a000) (5) Data frame sent\n+ nslookup nodeport-service\nI0704 09:10:55.118282    2858 log.go:172] (0xc0001194a0) Data frame received for 3\nI0704 09:10:55.118309    2858 log.go:172] (0xc00068bb80) (3) Data frame handling\nI0704 09:10:55.118329    2858 log.go:172] (0xc00068bb80) (3) Data frame sent\nI0704 09:10:55.119597    2858 log.go:172] (0xc0001194a0) Data frame received for 3\nI0704 09:10:55.119614    2858 log.go:172] (0xc00068bb80) (3) Data frame handling\nI0704 09:10:55.119627    2858 log.go:172] (0xc00068bb80) (3) Data frame sent\nI0704 09:10:55.120164    2858 log.go:172] (0xc0001194a0) Data frame received for 3\nI0704 09:10:55.120182    2858 log.go:172] (0xc00068bb80) (3) Data frame handling\nI0704 09:10:55.120221    2858 log.go:172] (0xc0001194a0) Data frame received for 5\nI0704 09:10:55.120260    2858 log.go:172] (0xc00026a000) (5) Data frame handling\nI0704 09:10:55.122616    2858 log.go:172] (0xc0001194a0) Data frame received for 1\nI0704 09:10:55.122634    2858 log.go:172] (0xc0009dc0a0) (1) Data frame handling\nI0704 09:10:55.122645    2858 log.go:172] (0xc0009dc0a0) (1) Data frame sent\nI0704 09:10:55.122656    2858 log.go:172] (0xc0001194a0) (0xc0009dc0a0) Stream removed, broadcasting: 1\nI0704 09:10:55.122684    2858 log.go:172] (0xc0001194a0) Go away received\nI0704 09:10:55.123002    2858 log.go:172] (0xc0001194a0) (0xc0009dc0a0) Stream removed, broadcasting: 1\nI0704 09:10:55.123018    2858 log.go:172] (0xc0001194a0) (0xc00068bb80) Stream removed, broadcasting: 3\nI0704 09:10:55.123028    2858 log.go:172] (0xc0001194a0) (0xc00026a000) Stream removed, broadcasting: 5\n"
Jul  4 09:10:55.127: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-693.svc.cluster.local\tcanonical name = externalsvc.services-693.svc.cluster.local.\nName:\texternalsvc.services-693.svc.cluster.local\nAddress: 10.98.12.197\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-693, will wait for the garbage collector to delete the pods
Jul  4 09:10:55.187: INFO: Deleting ReplicationController externalsvc took: 6.291358ms
Jul  4 09:10:55.487: INFO: Terminating ReplicationController externalsvc pods took: 300.378644ms
Jul  4 09:11:06.813: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:11:06.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-693" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.025 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":136,"skipped":2393,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:11:06.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:11:06.929: INFO: Creating deployment "webserver-deployment"
Jul  4 09:11:06.932: INFO: Waiting for observed generation 1
Jul  4 09:11:09.350: INFO: Waiting for all required pods to come up
Jul  4 09:11:09.524: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul  4 09:11:21.532: INFO: Waiting for deployment "webserver-deployment" to complete
Jul  4 09:11:21.538: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jul  4 09:11:21.544: INFO: Updating deployment webserver-deployment
Jul  4 09:11:21.544: INFO: Waiting for observed generation 2
Jul  4 09:11:24.422: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul  4 09:11:24.502: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul  4 09:11:24.668: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul  4 09:11:24.697: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul  4 09:11:24.697: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul  4 09:11:24.700: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul  4 09:11:24.703: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jul  4 09:11:24.703: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jul  4 09:11:24.940: INFO: Updating deployment webserver-deployment
Jul  4 09:11:24.940: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jul  4 09:11:25.389: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul  4 09:11:25.443: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul  4 09:11:26.075: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-2269 /apis/apps/v1/namespaces/deployment-2269/deployments/webserver-deployment ae983867-acf4-44c8-ab07-b68363c9e85a 20277 3 2020-07-04 09:11:06 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044f6308  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-07-04 09:11:23 +0000 UTC,LastTransitionTime:2020-07-04 09:11:06 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-04 09:11:25 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jul  4 09:11:26.139: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-2269 /apis/apps/v1/namespaces/deployment-2269/replicasets/webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 20315 3 2020-07-04 09:11:21 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment ae983867-acf4-44c8-ab07-b68363c9e85a 0xc0044f67d7 0xc0044f67d8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044f6848  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  4 09:11:26.139: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jul  4 09:11:26.139: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-2269 /apis/apps/v1/namespaces/deployment-2269/replicasets/webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 20316 3 2020-07-04 09:11:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ae983867-acf4-44c8-ab07-b68363c9e85a 0xc0044f6717 0xc0044f6718}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044f6778  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jul  4 09:11:26.247: INFO: Pod "webserver-deployment-595b5b9587-58tcs" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-58tcs webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-58tcs 910f77c0-55b0-44fc-9883-63bb107cfbb3 20107 0 2020-07-04 09:11:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f6ce7 0xc0044f6ce8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.83,StartTime:2020-07-04 09:11:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:11:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a944ec684b518565932d4b48f2a30545fa4917eb7d36cf11ee5ce691bf1b9558,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.247: INFO: Pod "webserver-deployment-595b5b9587-692dw" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-692dw webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-692dw c44daded-1bf4-47cc-9e36-068ed3d8ab65 20313 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f6e60 0xc0044f6e61}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.248: INFO: Pod "webserver-deployment-595b5b9587-6mnhz" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6mnhz webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-6mnhz 49dfd8f8-72f1-4ecc-a043-8d7a300a409f 20164 0 2020-07-04 09:11:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f6f70 0xc0044f6f71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.96,StartTime:2020-07-04 09:11:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:11:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a874a9686dfcb7feee507eabccfbf91447906024972ef6a385555e020a814ac2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.248: INFO: Pod "webserver-deployment-595b5b9587-85m8z" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-85m8z webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-85m8z d9988300-1aa1-4798-8234-c6d6b0f891c8 20324 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f70e0 0xc0044f70e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-04 09:11:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.248: INFO: Pod "webserver-deployment-595b5b9587-99wjv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-99wjv webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-99wjv 23eec72c-6785-49ae-a153-34111c74f837 20317 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f7237 0xc0044f7238}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.249: INFO: Pod "webserver-deployment-595b5b9587-9jv69" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9jv69 webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-9jv69 c6cb0353-8b6a-4729-b62a-d6ce1fa804c3 20179 0 2020-07-04 09:11:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f7350 0xc0044f7351}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.97,StartTime:2020-07-04 09:11:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:11:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7d54b925e90de3f6c662d17ce64125cbf0b1d89124f0105d193875982d36e825,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.249: INFO: Pod "webserver-deployment-595b5b9587-9rp5b" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9rp5b webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-9rp5b 4235139a-4747-44e7-b1f4-5b327819eec5 20279 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f74c0 0xc0044f74c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.249: INFO: Pod "webserver-deployment-595b5b9587-9s887" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9s887 webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-9s887 3c1f6193-55c7-4a83-9d70-904bd5fdee5d 20193 0 2020-07-04 09:11:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f75e0 0xc0044f75e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.99,StartTime:2020-07-04 09:11:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:11:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://556285e13c1aa3dc6e43664a655674f36b9a30c2374d5c389cfd171e96cf9946,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.99,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.250: INFO: Pod "webserver-deployment-595b5b9587-9vz9n" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9vz9n webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-9vz9n b6ae3f46-c24c-4afc-b48d-9b83d978687b 20175 0 2020-07-04 09:11:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f7750 0xc0044f7751}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.86,StartTime:2020-07-04 09:11:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:11:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://69e408c4345e239ff8df5402bf8e3acb85e437ec12683ce871cb32b0cc1f32c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.86,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.250: INFO: Pod "webserver-deployment-595b5b9587-bk2xm" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bk2xm webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-bk2xm 0c52c28b-144b-4abf-b229-67362c3311e9 20299 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f78c0 0xc0044f78c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.250: INFO: Pod "webserver-deployment-595b5b9587-dpzjp" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dpzjp webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-dpzjp d0f18c95-946e-4fbe-af06-262f00e67148 20145 0 2020-07-04 09:11:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f79d0 0xc0044f79d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.84,StartTime:2020-07-04 09:11:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:11:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://842512ab4f3583773424a1947099829db5d4c3fd760baf0c307666fcdbe15ab4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.250: INFO: Pod "webserver-deployment-595b5b9587-gghsb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gghsb webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-gghsb 04d95bac-a134-4fce-8515-69d51ad86eb0 20184 0 2020-07-04 09:11:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f7b40 0xc0044f7b41}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.98,StartTime:2020-07-04 09:11:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:11:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6b35b2a528acf210c02e64ca42ff79ec57be47002027595f1ce7a08d12d0268b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.250: INFO: Pod "webserver-deployment-595b5b9587-jrtfb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jrtfb webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-jrtfb aadd7d86-0b86-4846-8f06-1e0031cab5ff 20123 0 2020-07-04 09:11:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f7cb0 0xc0044f7cb1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.95,StartTime:2020-07-04 09:11:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:11:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://00f096b4c189c33ff58f6a70d73f3493781cf80752053a81b74eafb31e3a9b5c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.251: INFO: Pod "webserver-deployment-595b5b9587-rk5xn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rk5xn webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-rk5xn 3abd3e0c-fb4c-447f-8aac-b9aee9af38d8 20295 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f7e20 0xc0044f7e21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.251: INFO: Pod "webserver-deployment-595b5b9587-sb7km" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sb7km webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-sb7km bc5ef8cd-5db1-421a-b439-12e56190fe55 20314 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc0044f7f30 0xc0044f7f31}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.251: INFO: Pod "webserver-deployment-595b5b9587-skt4p" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-skt4p webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-skt4p ba7cc30a-9c8f-4dab-8c51-7cb4b911677b 20289 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc004eee040 0xc004eee041}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.251: INFO: Pod "webserver-deployment-595b5b9587-snm89" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-snm89 webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-snm89 1911383f-e5a7-4412-8188-ac0c1a1ea0c3 20296 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc004eee160 0xc004eee161}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.251: INFO: Pod "webserver-deployment-595b5b9587-tlvfc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tlvfc webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-tlvfc f22e0944-9191-4761-99e2-fe9a0fb9463b 20312 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc004eee280 0xc004eee281}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.252: INFO: Pod "webserver-deployment-595b5b9587-txsdh" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-txsdh webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-txsdh 15f147c3-436b-44b6-bb09-3336af7984ec 20298 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc004eee390 0xc004eee391}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.252: INFO: Pod "webserver-deployment-595b5b9587-x7f8r" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x7f8r webserver-deployment-595b5b9587- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-595b5b9587-x7f8r dc8d2c5d-8d3e-4ef1-8fa1-06550984fd83 20319 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c126c43-9776-463e-a5cd-23b7a58bd0c8 0xc004eee4a0 0xc004eee4a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.252: INFO: Pod "webserver-deployment-c7997dcc8-4dx6l" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4dx6l webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-4dx6l bb2a5ea0-371d-4a42-9259-101a2c31235a 20232 0 2020-07-04 09:11:21 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eee5b0 0xc004eee5b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-04 09:11:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.252: INFO: Pod "webserver-deployment-c7997dcc8-5nshh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5nshh webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-5nshh e5f0e45c-bc2a-4a3a-8828-5daef11b0d92 20304 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eee720 0xc004eee721}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.253: INFO: Pod "webserver-deployment-c7997dcc8-9prbb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9prbb webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-9prbb 31a1167f-039f-4603-a1d3-eaad00d4b131 20307 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eee840 0xc004eee841}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.253: INFO: Pod "webserver-deployment-c7997dcc8-fpmnp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fpmnp webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-fpmnp 6c87a4da-18d7-4cd0-aacf-6630e6499f54 20321 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eee960 0xc004eee961}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-04 09:11:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.253: INFO: Pod "webserver-deployment-c7997dcc8-g5wqp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g5wqp webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-g5wqp 576d30c3-a8e0-4c02-b4a4-28ccb0ba6739 20318 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eeead0 0xc004eeead1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.253: INFO: Pod "webserver-deployment-c7997dcc8-gx74m" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gx74m webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-gx74m 8fb75d74-6e63-44f1-93c0-7cb774d317f6 20286 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eeebf0 0xc004eeebf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.253: INFO: Pod "webserver-deployment-c7997dcc8-jmclw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jmclw webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-jmclw b05935c5-7e3f-45e6-94c7-61e3b107efb4 20292 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eeed10 0xc004eeed11}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.254: INFO: Pod "webserver-deployment-c7997dcc8-jzfl7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jzfl7 webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-jzfl7 94a317fa-0de3-4096-83aa-b0c9270369d4 20258 0 2020-07-04 09:11:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eeee40 0xc004eeee41}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-04 09:11:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.254: INFO: Pod "webserver-deployment-c7997dcc8-nhzdk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nhzdk webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-nhzdk 39a405c0-2411-49a1-8154-90bf3fa094b3 20259 0 2020-07-04 09:11:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eeefc0 0xc004eeefc1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-04 09:11:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.254: INFO: Pod "webserver-deployment-c7997dcc8-nqd5m" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nqd5m webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-nqd5m a9079855-3365-40fd-a589-fb4ff6471256 20230 0 2020-07-04 09:11:21 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eef150 0xc004eef151}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-04 09:11:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.254: INFO: Pod "webserver-deployment-c7997dcc8-v5w84" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v5w84 webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-v5w84 a9bc41cb-31b6-46b3-bfec-c8a22643c162 20248 0 2020-07-04 09:11:21 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eef2c0 0xc004eef2c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-04 09:11:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.254: INFO: Pod "webserver-deployment-c7997dcc8-wqchv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wqchv webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-wqchv 9bc3b223-c656-4780-9a92-a230cf27056e 20308 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eef430 0xc004eef431}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  4 09:11:26.254: INFO: Pod "webserver-deployment-c7997dcc8-xfkwp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xfkwp webserver-deployment-c7997dcc8- deployment-2269 /api/v1/namespaces/deployment-2269/pods/webserver-deployment-c7997dcc8-xfkwp b681c8fb-b967-47ef-88e8-f643de9c1c29 20309 0 2020-07-04 09:11:25 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6254e256-55f7-449d-ab69-5296753155ac 0xc004eef550 0xc004eef551}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4vms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4vms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:11:26.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2269" for this suite.

• [SLOW TEST:19.540 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":137,"skipped":2400,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:11:26.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:11:28.540: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 09:11:30.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450689, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:11:32.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450689, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:11:37.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450689, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:11:38.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450689, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:11:40.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450689, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:11:42.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450689, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:11:44.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450689, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:11:46.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450689, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:11:48.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450689, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:11:50.557: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450689, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450688, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:11:53.818: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jul  4 09:11:58.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-5506 to-be-attached-pod -i -c=container1'
Jul  4 09:11:58.137: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:11:58.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5506" for this suite.
STEP: Destroying namespace "webhook-5506-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:32.198 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":138,"skipped":2406,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:11:58.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:12:16.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5776" for this suite.

• [SLOW TEST:18.178 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":139,"skipped":2406,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:12:16.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Jul  4 09:12:17.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul  4 09:12:39.153: INFO: stderr: ""
Jul  4 09:12:39.153: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32777\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32777/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:12:39.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5936" for this suite.

• [SLOW TEST:22.383 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1021
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":140,"skipped":2429,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:12:39.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:12:43.598: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 09:12:45.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450763, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450763, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450764, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450762, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:12:47.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450763, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450763, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450764, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450762, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:12:49.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450763, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450763, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450764, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450762, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:12:52.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450763, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450763, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450764, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450762, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:12:53.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450763, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450763, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450764, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450762, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:12:57.390: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:13:13.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1672" for this suite.
STEP: Destroying namespace "webhook-1672-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:35.128 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":141,"skipped":2435,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:13:14.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:13:16.303: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 09:13:18.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450796, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450796, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450796, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450796, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:13:20.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450796, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450796, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450796, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450796, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:13:23.781: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:13:23.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7291-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:13:25.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3868" for this suite.
STEP: Destroying namespace "webhook-3868-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.512 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":142,"skipped":2455,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:13:25.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jul  4 09:13:25.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:13:42.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7294" for this suite.

• [SLOW TEST:16.544 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":143,"skipped":2456,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:13:42.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  4 09:13:46.512: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:13:46.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2255" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2467,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:13:46.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:13:46.627: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jul  4 09:13:48.671: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:13:49.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2365" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":145,"skipped":2489,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:13:49.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul  4 09:13:50.058: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  4 09:13:50.087: INFO: Waiting for terminating namespaces to be deleted...
Jul  4 09:13:50.090: INFO: 
Logging pods the kubelet thinks is on node jerma-worker before test
Jul  4 09:13:50.107: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  4 09:13:50.107: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  4 09:13:50.107: INFO: condition-test-bpkjd from replication-controller-2365 started at 2020-07-04 09:13:47 +0000 UTC (1 container statuses recorded)
Jul  4 09:13:50.107: INFO: 	Container httpd ready: false, restart count 0
Jul  4 09:13:50.107: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  4 09:13:50.107: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  4 09:13:50.107: INFO: 
Logging pods the kubelet thinks is on node jerma-worker2 before test
Jul  4 09:13:50.126: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  4 09:13:50.126: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  4 09:13:50.126: INFO: condition-test-p2h5r from replication-controller-2365 started at 2020-07-04 09:13:47 +0000 UTC (1 container statuses recorded)
Jul  4 09:13:50.126: INFO: 	Container httpd ready: false, restart count 0
Jul  4 09:13:50.126: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  4 09:13:50.126: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-92e73b8a-ca59-4584-903a-3d40f92c0f3d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-92e73b8a-ca59-4584-903a-3d40f92c0f3d off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-92e73b8a-ca59-4584-903a-3d40f92c0f3d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:14:01.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-726" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:11.445 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":146,"skipped":2494,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:14:01.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-9facc76e-0796-4529-8de6-6f9dd2ff044a
STEP: Creating a pod to test consume configMaps
Jul  4 09:14:01.299: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e33cd90f-f2a8-43d7-b80d-7a0f770a73e6" in namespace "projected-7502" to be "success or failure"
Jul  4 09:14:01.309: INFO: Pod "pod-projected-configmaps-e33cd90f-f2a8-43d7-b80d-7a0f770a73e6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.977866ms
Jul  4 09:14:03.313: INFO: Pod "pod-projected-configmaps-e33cd90f-f2a8-43d7-b80d-7a0f770a73e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0137264s
Jul  4 09:14:05.591: INFO: Pod "pod-projected-configmaps-e33cd90f-f2a8-43d7-b80d-7a0f770a73e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291774059s
Jul  4 09:14:07.595: INFO: Pod "pod-projected-configmaps-e33cd90f-f2a8-43d7-b80d-7a0f770a73e6": Phase="Running", Reason="", readiness=true. Elapsed: 6.295131123s
Jul  4 09:14:09.599: INFO: Pod "pod-projected-configmaps-e33cd90f-f2a8-43d7-b80d-7a0f770a73e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.299258294s
STEP: Saw pod success
Jul  4 09:14:09.599: INFO: Pod "pod-projected-configmaps-e33cd90f-f2a8-43d7-b80d-7a0f770a73e6" satisfied condition "success or failure"
Jul  4 09:14:09.602: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e33cd90f-f2a8-43d7-b80d-7a0f770a73e6 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  4 09:14:09.628: INFO: Waiting for pod pod-projected-configmaps-e33cd90f-f2a8-43d7-b80d-7a0f770a73e6 to disappear
Jul  4 09:14:09.631: INFO: Pod pod-projected-configmaps-e33cd90f-f2a8-43d7-b80d-7a0f770a73e6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:14:09.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7502" for this suite.

• [SLOW TEST:8.490 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2497,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:14:09.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-b72316e3-4a9b-472f-ae7d-46a9bad159c8
STEP: Creating a pod to test consume configMaps
Jul  4 09:14:09.748: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d3e38066-220b-4a21-a5ff-b727ea11f5b6" in namespace "projected-6455" to be "success or failure"
Jul  4 09:14:09.763: INFO: Pod "pod-projected-configmaps-d3e38066-220b-4a21-a5ff-b727ea11f5b6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.966532ms
Jul  4 09:14:11.767: INFO: Pod "pod-projected-configmaps-d3e38066-220b-4a21-a5ff-b727ea11f5b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018984226s
Jul  4 09:14:13.772: INFO: Pod "pod-projected-configmaps-d3e38066-220b-4a21-a5ff-b727ea11f5b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023519299s
STEP: Saw pod success
Jul  4 09:14:13.772: INFO: Pod "pod-projected-configmaps-d3e38066-220b-4a21-a5ff-b727ea11f5b6" satisfied condition "success or failure"
Jul  4 09:14:13.775: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-d3e38066-220b-4a21-a5ff-b727ea11f5b6 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  4 09:14:13.801: INFO: Waiting for pod pod-projected-configmaps-d3e38066-220b-4a21-a5ff-b727ea11f5b6 to disappear
Jul  4 09:14:13.844: INFO: Pod pod-projected-configmaps-d3e38066-220b-4a21-a5ff-b727ea11f5b6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:14:13.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6455" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2516,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:14:13.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  4 09:14:13.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3275'
Jul  4 09:14:14.068: INFO: stderr: ""
Jul  4 09:14:14.068: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759
Jul  4 09:14:14.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3275'
Jul  4 09:14:17.730: INFO: stderr: ""
Jul  4 09:14:17.730: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:14:17.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3275" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":149,"skipped":2522,"failed":0}

------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:14:17.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jul  4 09:14:17.868: INFO: Created pod &Pod{ObjectMeta:{dns-2298  dns-2298 /api/v1/namespaces/dns-2298/pods/dns-2298 6c7255f0-99be-418b-9287-20b1b5f9acbd 21436 0 2020-07-04 09:14:17 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8cwwl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8cwwl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8cwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostnam
e:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Jul  4 09:14:21.882: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2298 PodName:dns-2298 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:14:21.882: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:14:21.915089       6 log.go:172] (0xc002b808f0) (0xc001470780) Create stream
I0704 09:14:21.915122       6 log.go:172] (0xc002b808f0) (0xc001470780) Stream added, broadcasting: 1
I0704 09:14:21.916803       6 log.go:172] (0xc002b808f0) Reply frame received for 1
I0704 09:14:21.916856       6 log.go:172] (0xc002b808f0) (0xc001d8e000) Create stream
I0704 09:14:21.916890       6 log.go:172] (0xc002b808f0) (0xc001d8e000) Stream added, broadcasting: 3
I0704 09:14:21.918180       6 log.go:172] (0xc002b808f0) Reply frame received for 3
I0704 09:14:21.918300       6 log.go:172] (0xc002b808f0) (0xc001a96000) Create stream
I0704 09:14:21.918324       6 log.go:172] (0xc002b808f0) (0xc001a96000) Stream added, broadcasting: 5
I0704 09:14:21.919341       6 log.go:172] (0xc002b808f0) Reply frame received for 5
I0704 09:14:21.996660       6 log.go:172] (0xc002b808f0) Data frame received for 3
I0704 09:14:21.996687       6 log.go:172] (0xc001d8e000) (3) Data frame handling
I0704 09:14:21.996709       6 log.go:172] (0xc001d8e000) (3) Data frame sent
I0704 09:14:21.997342       6 log.go:172] (0xc002b808f0) Data frame received for 3
I0704 09:14:21.997383       6 log.go:172] (0xc001d8e000) (3) Data frame handling
I0704 09:14:21.997726       6 log.go:172] (0xc002b808f0) Data frame received for 5
I0704 09:14:21.997767       6 log.go:172] (0xc001a96000) (5) Data frame handling
I0704 09:14:21.998993       6 log.go:172] (0xc002b808f0) Data frame received for 1
I0704 09:14:21.999016       6 log.go:172] (0xc001470780) (1) Data frame handling
I0704 09:14:21.999029       6 log.go:172] (0xc001470780) (1) Data frame sent
I0704 09:14:21.999049       6 log.go:172] (0xc002b808f0) (0xc001470780) Stream removed, broadcasting: 1
I0704 09:14:21.999079       6 log.go:172] (0xc002b808f0) Go away received
I0704 09:14:21.999254       6 log.go:172] (0xc002b808f0) (0xc001470780) Stream removed, broadcasting: 1
I0704 09:14:21.999284       6 log.go:172] (0xc002b808f0) (0xc001d8e000) Stream removed, broadcasting: 3
I0704 09:14:21.999299       6 log.go:172] (0xc002b808f0) (0xc001a96000) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jul  4 09:14:21.999: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2298 PodName:dns-2298 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:14:21.999: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:14:22.028889       6 log.go:172] (0xc006d37a20) (0xc001f0ea00) Create stream
I0704 09:14:22.028918       6 log.go:172] (0xc006d37a20) (0xc001f0ea00) Stream added, broadcasting: 1
I0704 09:14:22.031477       6 log.go:172] (0xc006d37a20) Reply frame received for 1
I0704 09:14:22.031506       6 log.go:172] (0xc006d37a20) (0xc001a960a0) Create stream
I0704 09:14:22.031518       6 log.go:172] (0xc006d37a20) (0xc001a960a0) Stream added, broadcasting: 3
I0704 09:14:22.032561       6 log.go:172] (0xc006d37a20) Reply frame received for 3
I0704 09:14:22.032613       6 log.go:172] (0xc006d37a20) (0xc001470820) Create stream
I0704 09:14:22.032626       6 log.go:172] (0xc006d37a20) (0xc001470820) Stream added, broadcasting: 5
I0704 09:14:22.033679       6 log.go:172] (0xc006d37a20) Reply frame received for 5
I0704 09:14:22.119948       6 log.go:172] (0xc006d37a20) Data frame received for 3
I0704 09:14:22.119970       6 log.go:172] (0xc001a960a0) (3) Data frame handling
I0704 09:14:22.119977       6 log.go:172] (0xc001a960a0) (3) Data frame sent
I0704 09:14:22.120960       6 log.go:172] (0xc006d37a20) Data frame received for 3
I0704 09:14:22.121000       6 log.go:172] (0xc001a960a0) (3) Data frame handling
I0704 09:14:22.121034       6 log.go:172] (0xc006d37a20) Data frame received for 5
I0704 09:14:22.121056       6 log.go:172] (0xc001470820) (5) Data frame handling
I0704 09:14:22.122525       6 log.go:172] (0xc006d37a20) Data frame received for 1
I0704 09:14:22.122544       6 log.go:172] (0xc001f0ea00) (1) Data frame handling
I0704 09:14:22.122550       6 log.go:172] (0xc001f0ea00) (1) Data frame sent
I0704 09:14:22.122571       6 log.go:172] (0xc006d37a20) (0xc001f0ea00) Stream removed, broadcasting: 1
I0704 09:14:22.122649       6 log.go:172] (0xc006d37a20) (0xc001f0ea00) Stream removed, broadcasting: 1
I0704 09:14:22.122663       6 log.go:172] (0xc006d37a20) (0xc001a960a0) Stream removed, broadcasting: 3
I0704 09:14:22.122792       6 log.go:172] (0xc006d37a20) Go away received
I0704 09:14:22.122858       6 log.go:172] (0xc006d37a20) (0xc001470820) Stream removed, broadcasting: 5
Jul  4 09:14:22.122: INFO: Deleting pod dns-2298...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:14:22.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2298" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":150,"skipped":2522,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:14:22.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul  4 09:14:31.790: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  4 09:14:31.794: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  4 09:14:33.794: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  4 09:14:33.798: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  4 09:14:35.794: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  4 09:14:35.799: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  4 09:14:37.794: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  4 09:14:37.798: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:14:37.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6177" for this suite.

• [SLOW TEST:15.546 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2523,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:14:37.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5492.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5492.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5492.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5492.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5492.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5492.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  4 09:14:44.044: INFO: DNS probes using dns-5492/dns-test-3d556ad2-b376-479d-850c-8bf1eb818f2f succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:14:44.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5492" for this suite.

• [SLOW TEST:6.374 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":152,"skipped":2529,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:14:44.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:14:45.011: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 09:14:47.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450885, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450885, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450885, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450885, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:14:50.052: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:14:50.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1507" for this suite.
STEP: Destroying namespace "webhook-1507-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.436 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":153,"skipped":2543,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:14:50.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  4 09:14:50.772: INFO: Waiting up to 5m0s for pod "pod-2e934fc4-d7e7-479c-804b-d822c963fdd2" in namespace "emptydir-8179" to be "success or failure"
Jul  4 09:14:50.784: INFO: Pod "pod-2e934fc4-d7e7-479c-804b-d822c963fdd2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.985409ms
Jul  4 09:14:52.814: INFO: Pod "pod-2e934fc4-d7e7-479c-804b-d822c963fdd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042196501s
Jul  4 09:14:54.819: INFO: Pod "pod-2e934fc4-d7e7-479c-804b-d822c963fdd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046485524s
STEP: Saw pod success
Jul  4 09:14:54.819: INFO: Pod "pod-2e934fc4-d7e7-479c-804b-d822c963fdd2" satisfied condition "success or failure"
Jul  4 09:14:54.822: INFO: Trying to get logs from node jerma-worker pod pod-2e934fc4-d7e7-479c-804b-d822c963fdd2 container test-container: 
STEP: delete the pod
Jul  4 09:14:54.869: INFO: Waiting for pod pod-2e934fc4-d7e7-479c-804b-d822c963fdd2 to disappear
Jul  4 09:14:54.884: INFO: Pod pod-2e934fc4-d7e7-479c-804b-d822c963fdd2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:14:54.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8179" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2556,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:14:54.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Jul  4 09:14:54.953: INFO: Waiting up to 5m0s for pod "client-containers-63f0f8b1-ec6b-48a6-a85d-52784e168466" in namespace "containers-4847" to be "success or failure"
Jul  4 09:14:54.970: INFO: Pod "client-containers-63f0f8b1-ec6b-48a6-a85d-52784e168466": Phase="Pending", Reason="", readiness=false. Elapsed: 16.18994ms
Jul  4 09:14:56.974: INFO: Pod "client-containers-63f0f8b1-ec6b-48a6-a85d-52784e168466": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020096041s
Jul  4 09:14:58.977: INFO: Pod "client-containers-63f0f8b1-ec6b-48a6-a85d-52784e168466": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02376864s
STEP: Saw pod success
Jul  4 09:14:58.977: INFO: Pod "client-containers-63f0f8b1-ec6b-48a6-a85d-52784e168466" satisfied condition "success or failure"
Jul  4 09:14:58.980: INFO: Trying to get logs from node jerma-worker2 pod client-containers-63f0f8b1-ec6b-48a6-a85d-52784e168466 container test-container: 
STEP: delete the pod
Jul  4 09:14:58.999: INFO: Waiting for pod client-containers-63f0f8b1-ec6b-48a6-a85d-52784e168466 to disappear
Jul  4 09:14:59.004: INFO: Pod client-containers-63f0f8b1-ec6b-48a6-a85d-52784e168466 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:14:59.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4847" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2627,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:14:59.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:14:59.090: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jul  4 09:14:59.253: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:15:07.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8572" for this suite.

• [SLOW TEST:7.997 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":157,"skipped":2712,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:15:07.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 09:15:07.295: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e300f128-bcba-42e2-8ae6-67abe3a1cff3" in namespace "projected-206" to be "success or failure"
Jul  4 09:15:07.316: INFO: Pod "downwardapi-volume-e300f128-bcba-42e2-8ae6-67abe3a1cff3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.241919ms
Jul  4 09:15:09.321: INFO: Pod "downwardapi-volume-e300f128-bcba-42e2-8ae6-67abe3a1cff3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025483702s
Jul  4 09:15:11.324: INFO: Pod "downwardapi-volume-e300f128-bcba-42e2-8ae6-67abe3a1cff3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02940518s
STEP: Saw pod success
Jul  4 09:15:11.324: INFO: Pod "downwardapi-volume-e300f128-bcba-42e2-8ae6-67abe3a1cff3" satisfied condition "success or failure"
Jul  4 09:15:11.327: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e300f128-bcba-42e2-8ae6-67abe3a1cff3 container client-container: 
STEP: delete the pod
Jul  4 09:15:11.389: INFO: Waiting for pod downwardapi-volume-e300f128-bcba-42e2-8ae6-67abe3a1cff3 to disappear
Jul  4 09:15:11.391: INFO: Pod downwardapi-volume-e300f128-bcba-42e2-8ae6-67abe3a1cff3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:15:11.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-206" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2716,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:15:11.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jul  4 09:15:15.714: INFO: &Pod{ObjectMeta:{send-events-55219278-0f7a-4504-9c4b-480bfe198949  events-2423 /api/v1/namespaces/events-2423/pods/send-events-55219278-0f7a-4504-9c4b-480bfe198949 c89e3885-19fb-4f13-a3f2-5d2f02ff6f38 21925 0 2020-07-04 09:15:11 +0000 UTC   map[name:foo time:696099242] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8tkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8tkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8tkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityConte
xt{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:15:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:15:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:15:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:15:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.114,StartTime:2020-07-04 09:15:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:15:14 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://4e1d6d57f37125d54121054c3cdcc6cf4b24ce77d8215fb5dce2fe7b1cd118ff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.114,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jul  4 09:15:17.719: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jul  4 09:15:19.724: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:15:19.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2423" for this suite.

• [SLOW TEST:8.359 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":159,"skipped":2719,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:15:19.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:15:23.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6433" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2746,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:15:23.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jul  4 09:15:23.969: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Jul  4 09:15:24.448: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jul  4 09:15:26.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:15:28.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:15:30.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:15:32.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:15:34.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:15:36.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:15:38.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:15:40.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:15:42.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450924, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:15:45.518: INFO: Waited 925.100751ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:15:46.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5733" for this suite.

• [SLOW TEST:22.522 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":161,"skipped":2765,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:15:46.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:15:47.132: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 09:15:49.142: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:15:51.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:15:53.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729450947, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:15:56.223: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:15:56.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8046" for this suite.
STEP: Destroying namespace "webhook-8046-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.082 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":162,"skipped":2775,"failed":0}
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:15:56.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Jul  4 09:15:56.543: INFO: Waiting up to 5m0s for pod "var-expansion-7f14d64f-12c8-4e01-b6b5-9a478f405643" in namespace "var-expansion-4356" to be "success or failure"
Jul  4 09:15:56.546: INFO: Pod "var-expansion-7f14d64f-12c8-4e01-b6b5-9a478f405643": Phase="Pending", Reason="", readiness=false. Elapsed: 3.43465ms
Jul  4 09:15:58.550: INFO: Pod "var-expansion-7f14d64f-12c8-4e01-b6b5-9a478f405643": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007564664s
Jul  4 09:16:00.555: INFO: Pod "var-expansion-7f14d64f-12c8-4e01-b6b5-9a478f405643": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012299182s
STEP: Saw pod success
Jul  4 09:16:00.555: INFO: Pod "var-expansion-7f14d64f-12c8-4e01-b6b5-9a478f405643" satisfied condition "success or failure"
Jul  4 09:16:00.558: INFO: Trying to get logs from node jerma-worker pod var-expansion-7f14d64f-12c8-4e01-b6b5-9a478f405643 container dapi-container: 
STEP: delete the pod
Jul  4 09:16:00.578: INFO: Waiting for pod var-expansion-7f14d64f-12c8-4e01-b6b5-9a478f405643 to disappear
Jul  4 09:16:00.598: INFO: Pod var-expansion-7f14d64f-12c8-4e01-b6b5-9a478f405643 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:16:00.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4356" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2780,"failed":0}
SSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:16:00.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Jul  4 09:16:00.676: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9035" to be "success or failure"
Jul  4 09:16:00.691: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.945467ms
Jul  4 09:16:02.695: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019303251s
Jul  4 09:16:04.700: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023529319s
Jul  4 09:16:06.704: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027532658s
STEP: Saw pod success
Jul  4 09:16:06.704: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul  4 09:16:06.706: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul  4 09:16:06.721: INFO: Waiting for pod pod-host-path-test to disappear
Jul  4 09:16:06.767: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:16:06.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9035" for this suite.

• [SLOW TEST:6.168 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2785,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:16:06.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:16:23.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4424" for this suite.

• [SLOW TEST:17.122 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":165,"skipped":2833,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:16:23.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-3198/secret-test-06e427e2-bcc3-4031-8bc2-f397cd30993e
STEP: Creating a pod to test consume secrets
Jul  4 09:16:24.076: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3a651dc-5ad6-481d-b805-d2879664fcc6" in namespace "secrets-3198" to be "success or failure"
Jul  4 09:16:24.080: INFO: Pod "pod-configmaps-e3a651dc-5ad6-481d-b805-d2879664fcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.445356ms
Jul  4 09:16:26.187: INFO: Pod "pod-configmaps-e3a651dc-5ad6-481d-b805-d2879664fcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110283034s
Jul  4 09:16:28.191: INFO: Pod "pod-configmaps-e3a651dc-5ad6-481d-b805-d2879664fcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114224881s
Jul  4 09:16:30.194: INFO: Pod "pod-configmaps-e3a651dc-5ad6-481d-b805-d2879664fcc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11792226s
STEP: Saw pod success
Jul  4 09:16:30.194: INFO: Pod "pod-configmaps-e3a651dc-5ad6-481d-b805-d2879664fcc6" satisfied condition "success or failure"
Jul  4 09:16:30.197: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e3a651dc-5ad6-481d-b805-d2879664fcc6 container env-test: 
STEP: delete the pod
Jul  4 09:16:30.278: INFO: Waiting for pod pod-configmaps-e3a651dc-5ad6-481d-b805-d2879664fcc6 to disappear
Jul  4 09:16:30.284: INFO: Pod pod-configmaps-e3a651dc-5ad6-481d-b805-d2879664fcc6 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:16:30.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3198" for this suite.

• [SLOW TEST:6.397 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2847,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:16:30.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  4 09:16:30.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7838'
Jul  4 09:16:31.050: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  4 09:16:31.050: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Jul  4 09:16:31.058: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jul  4 09:16:31.086: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jul  4 09:16:31.172: INFO: scanned /root for discovery docs: 
Jul  4 09:16:31.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7838'
Jul  4 09:16:47.096: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul  4 09:16:47.096: INFO: stdout: "Created e2e-test-httpd-rc-f2602127ab4310313b097114ee49d578\nScaling up e2e-test-httpd-rc-f2602127ab4310313b097114ee49d578 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-f2602127ab4310313b097114ee49d578 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-f2602127ab4310313b097114ee49d578 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Jul  4 09:16:47.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7838'
Jul  4 09:16:47.191: INFO: stderr: ""
Jul  4 09:16:47.191: INFO: stdout: "e2e-test-httpd-rc-f2602127ab4310313b097114ee49d578-drtmf "
Jul  4 09:16:47.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f2602127ab4310313b097114ee49d578-drtmf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7838'
Jul  4 09:16:47.267: INFO: stderr: ""
Jul  4 09:16:47.267: INFO: stdout: "true"
Jul  4 09:16:47.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f2602127ab4310313b097114ee49d578-drtmf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7838'
Jul  4 09:16:47.349: INFO: stderr: ""
Jul  4 09:16:47.349: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Jul  4 09:16:47.349: INFO: e2e-test-httpd-rc-f2602127ab4310313b097114ee49d578-drtmf is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591
Jul  4 09:16:47.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7838'
Jul  4 09:16:47.454: INFO: stderr: ""
Jul  4 09:16:47.454: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:16:47.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7838" for this suite.

• [SLOW TEST:17.206 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":167,"skipped":2863,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:16:47.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-fd7feabb-6fab-41ad-8c37-bb9ab73890ab
Jul  4 09:16:47.644: INFO: Pod name my-hostname-basic-fd7feabb-6fab-41ad-8c37-bb9ab73890ab: Found 0 pods out of 1
Jul  4 09:16:52.653: INFO: Pod name my-hostname-basic-fd7feabb-6fab-41ad-8c37-bb9ab73890ab: Found 1 pods out of 1
Jul  4 09:16:52.653: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-fd7feabb-6fab-41ad-8c37-bb9ab73890ab" are running
Jul  4 09:16:52.668: INFO: Pod "my-hostname-basic-fd7feabb-6fab-41ad-8c37-bb9ab73890ab-b4wx2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-04 09:16:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-04 09:16:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-04 09:16:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-04 09:16:47 +0000 UTC Reason: Message:}])
Jul  4 09:16:52.668: INFO: Trying to dial the pod
Jul  4 09:16:57.675: INFO: Controller my-hostname-basic-fd7feabb-6fab-41ad-8c37-bb9ab73890ab: Got expected result from replica 1 [my-hostname-basic-fd7feabb-6fab-41ad-8c37-bb9ab73890ab-b4wx2]: "my-hostname-basic-fd7feabb-6fab-41ad-8c37-bb9ab73890ab-b4wx2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:16:57.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3587" for this suite.

• [SLOW TEST:10.179 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":168,"skipped":2869,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:16:57.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:16:59.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-8618" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":169,"skipped":2889,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:16:59.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3492
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-3492
Jul  4 09:17:02.118: INFO: Found 0 stateful pods, waiting for 1
Jul  4 09:17:12.283: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul  4 09:17:12.547: INFO: Deleting all statefulset in ns statefulset-3492
Jul  4 09:17:13.337: INFO: Scaling statefulset ss to 0
Jul  4 09:17:45.906: INFO: Waiting for statefulset status.replicas updated to 0
Jul  4 09:17:45.909: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:17:45.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3492" for this suite.

• [SLOW TEST:46.029 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":170,"skipped":2891,"failed":0}
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:17:45.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 09:17:46.110: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e93a3b0-077b-49f9-9603-69b979a46ad5" in namespace "projected-6838" to be "success or failure"
Jul  4 09:17:46.130: INFO: Pod "downwardapi-volume-0e93a3b0-077b-49f9-9603-69b979a46ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.016743ms
Jul  4 09:17:48.439: INFO: Pod "downwardapi-volume-0e93a3b0-077b-49f9-9603-69b979a46ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328713822s
Jul  4 09:17:50.443: INFO: Pod "downwardapi-volume-0e93a3b0-077b-49f9-9603-69b979a46ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332758248s
Jul  4 09:17:52.494: INFO: Pod "downwardapi-volume-0e93a3b0-077b-49f9-9603-69b979a46ad5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.384055683s
STEP: Saw pod success
Jul  4 09:17:52.494: INFO: Pod "downwardapi-volume-0e93a3b0-077b-49f9-9603-69b979a46ad5" satisfied condition "success or failure"
Jul  4 09:17:52.497: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0e93a3b0-077b-49f9-9603-69b979a46ad5 container client-container: 
STEP: delete the pod
Jul  4 09:17:52.695: INFO: Waiting for pod downwardapi-volume-0e93a3b0-077b-49f9-9603-69b979a46ad5 to disappear
Jul  4 09:17:52.752: INFO: Pod downwardapi-volume-0e93a3b0-077b-49f9-9603-69b979a46ad5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:17:52.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6838" for this suite.

• [SLOW TEST:6.938 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2891,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:17:52.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:17:52.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jul  4 09:17:53.165: INFO: stderr: ""
Jul  4 09:17:53.165: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.8\", GitCommit:\"35dc4cdc26cfcb6614059c4c6e836e5f0dc61dee\", GitTreeState:\"clean\", BuildDate:\"2020-07-03T19:01:23Z\", GoVersion:\"go1.13.11\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:17:53.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2110" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":172,"skipped":2894,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:17:53.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-6ffd3b6d-3907-4a14-aee5-55bb89717e49
STEP: Creating a pod to test consume secrets
Jul  4 09:17:53.335: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-502add4b-d472-4504-adda-e1aad86b5f17" in namespace "projected-940" to be "success or failure"
Jul  4 09:17:53.463: INFO: Pod "pod-projected-secrets-502add4b-d472-4504-adda-e1aad86b5f17": Phase="Pending", Reason="", readiness=false. Elapsed: 127.625311ms
Jul  4 09:17:55.475: INFO: Pod "pod-projected-secrets-502add4b-d472-4504-adda-e1aad86b5f17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13943445s
Jul  4 09:17:57.479: INFO: Pod "pod-projected-secrets-502add4b-d472-4504-adda-e1aad86b5f17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143482653s
Jul  4 09:17:59.483: INFO: Pod "pod-projected-secrets-502add4b-d472-4504-adda-e1aad86b5f17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.147815248s
STEP: Saw pod success
Jul  4 09:17:59.483: INFO: Pod "pod-projected-secrets-502add4b-d472-4504-adda-e1aad86b5f17" satisfied condition "success or failure"
Jul  4 09:17:59.486: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-502add4b-d472-4504-adda-e1aad86b5f17 container projected-secret-volume-test: 
STEP: delete the pod
Jul  4 09:17:59.543: INFO: Waiting for pod pod-projected-secrets-502add4b-d472-4504-adda-e1aad86b5f17 to disappear
Jul  4 09:17:59.548: INFO: Pod pod-projected-secrets-502add4b-d472-4504-adda-e1aad86b5f17 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:17:59.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-940" for this suite.

• [SLOW TEST:6.381 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2919,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:17:59.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-383b0e0e-0ae2-420a-a5c7-0d61eac5eea8
STEP: Creating a pod to test consume secrets
Jul  4 09:17:59.663: INFO: Waiting up to 5m0s for pod "pod-secrets-d97e789a-818f-4fca-8299-7656c0ccd0ae" in namespace "secrets-4636" to be "success or failure"
Jul  4 09:17:59.697: INFO: Pod "pod-secrets-d97e789a-818f-4fca-8299-7656c0ccd0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 34.628786ms
Jul  4 09:18:01.706: INFO: Pod "pod-secrets-d97e789a-818f-4fca-8299-7656c0ccd0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04297202s
Jul  4 09:18:03.709: INFO: Pod "pod-secrets-d97e789a-818f-4fca-8299-7656c0ccd0ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045754582s
STEP: Saw pod success
Jul  4 09:18:03.709: INFO: Pod "pod-secrets-d97e789a-818f-4fca-8299-7656c0ccd0ae" satisfied condition "success or failure"
Jul  4 09:18:03.718: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d97e789a-818f-4fca-8299-7656c0ccd0ae container secret-volume-test: 
STEP: delete the pod
Jul  4 09:18:03.793: INFO: Waiting for pod pod-secrets-d97e789a-818f-4fca-8299-7656c0ccd0ae to disappear
Jul  4 09:18:03.801: INFO: Pod pod-secrets-d97e789a-818f-4fca-8299-7656c0ccd0ae no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:18:03.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4636" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2921,"failed":0}
SSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:18:03.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-h4b95 in namespace proxy-6528
I0704 09:18:03.936453       6 runners.go:189] Created replication controller with name: proxy-service-h4b95, namespace: proxy-6528, replica count: 1
I0704 09:18:04.987114       6 runners.go:189] proxy-service-h4b95 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:18:05.987307       6 runners.go:189] proxy-service-h4b95 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:18:06.987510       6 runners.go:189] proxy-service-h4b95 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:18:07.987712       6 runners.go:189] proxy-service-h4b95 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:18:08.987962       6 runners.go:189] proxy-service-h4b95 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0704 09:18:09.988279       6 runners.go:189] proxy-service-h4b95 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  4 09:18:09.992: INFO: setup took 6.115939519s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul  4 09:18:10.004: INFO: (0) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 12.247146ms)
Jul  4 09:18:10.004: INFO: (0) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 12.429791ms)
Jul  4 09:18:10.004: INFO: (0) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 12.253063ms)
Jul  4 09:18:10.004: INFO: (0) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 12.44624ms)
Jul  4 09:18:10.005: INFO: (0) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 13.029491ms)
Jul  4 09:18:10.006: INFO: (0) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 13.510247ms)
Jul  4 09:18:10.006: INFO: (0) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 13.591173ms)
Jul  4 09:18:10.006: INFO: (0) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 14.076739ms)
Jul  4 09:18:10.006: INFO: (0) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 14.241075ms)
Jul  4 09:18:10.008: INFO: (0) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 15.551294ms)
Jul  4 09:18:10.008: INFO: (0) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 15.399642ms)
Jul  4 09:18:10.010: INFO: (0) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 17.817986ms)
Jul  4 09:18:10.010: INFO: (0) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 17.765767ms)
Jul  4 09:18:10.011: INFO: (0) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 18.756063ms)
Jul  4 09:18:10.011: INFO: (0) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 18.837263ms)
Jul  4 09:18:10.014: INFO: (0) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test (200; 5.6433ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 5.570577ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 5.727573ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 5.774697ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 5.641265ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 5.754927ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 5.720415ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 5.722318ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 5.963586ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 5.993051ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 5.941191ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 6.23051ms)
Jul  4 09:18:10.020: INFO: (1) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 6.377141ms)
Jul  4 09:18:10.024: INFO: (2) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 3.731753ms)
Jul  4 09:18:10.024: INFO: (2) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 3.797254ms)
Jul  4 09:18:10.024: INFO: (2) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 3.665144ms)
Jul  4 09:18:10.024: INFO: (2) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 3.872792ms)
Jul  4 09:18:10.025: INFO: (2) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 4.753827ms)
Jul  4 09:18:10.026: INFO: (2) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 5.35539ms)
Jul  4 09:18:10.027: INFO: (2) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 6.689205ms)
Jul  4 09:18:10.027: INFO: (2) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 6.758135ms)
Jul  4 09:18:10.027: INFO: (2) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 6.679746ms)
Jul  4 09:18:10.027: INFO: (2) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: ... (200; 4.147692ms)
Jul  4 09:18:10.033: INFO: (3) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 4.778675ms)
Jul  4 09:18:10.034: INFO: (3) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 5.156811ms)
Jul  4 09:18:10.034: INFO: (3) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 5.156428ms)
Jul  4 09:18:10.034: INFO: (3) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 5.289754ms)
Jul  4 09:18:10.034: INFO: (3) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 5.23596ms)
Jul  4 09:18:10.034: INFO: (3) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 5.19471ms)
Jul  4 09:18:10.034: INFO: (3) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test<... (200; 5.36539ms)
Jul  4 09:18:10.034: INFO: (3) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 5.786668ms)
Jul  4 09:18:10.038: INFO: (4) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 3.497501ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 4.494049ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 4.497368ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 4.531879ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 4.612777ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 4.539634ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 4.576448ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 4.525257ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 4.809527ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 4.770685ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 5.059777ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 5.046512ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 5.063312ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 5.106401ms)
Jul  4 09:18:10.039: INFO: (4) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test<... (200; 2.324893ms)
Jul  4 09:18:10.044: INFO: (5) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 4.416478ms)
Jul  4 09:18:10.044: INFO: (5) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: ... (200; 5.084236ms)
Jul  4 09:18:10.045: INFO: (5) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 5.490627ms)
Jul  4 09:18:10.045: INFO: (5) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 5.626667ms)
Jul  4 09:18:10.045: INFO: (5) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 5.793057ms)
Jul  4 09:18:10.045: INFO: (5) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 5.783613ms)
Jul  4 09:18:10.045: INFO: (5) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 5.86967ms)
Jul  4 09:18:10.045: INFO: (5) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 5.829999ms)
Jul  4 09:18:10.048: INFO: (6) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: ... (200; 3.465701ms)
Jul  4 09:18:10.049: INFO: (6) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 3.472ms)
Jul  4 09:18:10.050: INFO: (6) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 4.532807ms)
Jul  4 09:18:10.050: INFO: (6) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 4.601622ms)
Jul  4 09:18:10.050: INFO: (6) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 4.786466ms)
Jul  4 09:18:10.050: INFO: (6) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 4.842902ms)
Jul  4 09:18:10.050: INFO: (6) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 4.864662ms)
Jul  4 09:18:10.050: INFO: (6) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 4.870672ms)
Jul  4 09:18:10.051: INFO: (6) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 5.065846ms)
Jul  4 09:18:10.051: INFO: (6) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 5.548587ms)
Jul  4 09:18:10.051: INFO: (6) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 5.49046ms)
Jul  4 09:18:10.051: INFO: (6) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 5.508259ms)
Jul  4 09:18:10.051: INFO: (6) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 5.622061ms)
Jul  4 09:18:10.051: INFO: (6) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 5.661653ms)
Jul  4 09:18:10.051: INFO: (6) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 5.613965ms)
Jul  4 09:18:10.055: INFO: (7) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 3.564383ms)
Jul  4 09:18:10.055: INFO: (7) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 3.806183ms)
Jul  4 09:18:10.055: INFO: (7) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 4.066117ms)
Jul  4 09:18:10.055: INFO: (7) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 4.106452ms)
Jul  4 09:18:10.055: INFO: (7) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 4.271782ms)
Jul  4 09:18:10.055: INFO: (7) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 4.254593ms)
Jul  4 09:18:10.055: INFO: (7) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 4.259406ms)
Jul  4 09:18:10.055: INFO: (7) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 4.295476ms)
Jul  4 09:18:10.056: INFO: (7) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: ... (200; 2.553206ms)
Jul  4 09:18:10.060: INFO: (8) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 2.543989ms)
Jul  4 09:18:10.060: INFO: (8) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 2.705168ms)
Jul  4 09:18:10.061: INFO: (8) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 3.706263ms)
Jul  4 09:18:10.061: INFO: (8) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test<... (200; 6.383986ms)
Jul  4 09:18:10.064: INFO: (8) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 6.38614ms)
Jul  4 09:18:10.064: INFO: (8) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 6.485525ms)
Jul  4 09:18:10.064: INFO: (8) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 6.416227ms)
Jul  4 09:18:10.064: INFO: (8) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 6.47461ms)
Jul  4 09:18:10.064: INFO: (8) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 6.479653ms)
Jul  4 09:18:10.064: INFO: (8) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 6.579564ms)
Jul  4 09:18:10.064: INFO: (8) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 6.563273ms)
Jul  4 09:18:10.064: INFO: (8) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 6.55797ms)
Jul  4 09:18:10.068: INFO: (9) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: ... (200; 5.885805ms)
Jul  4 09:18:10.070: INFO: (9) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 5.819451ms)
Jul  4 09:18:10.070: INFO: (9) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 5.91591ms)
Jul  4 09:18:10.070: INFO: (9) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 6.005484ms)
Jul  4 09:18:10.070: INFO: (9) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 5.882569ms)
Jul  4 09:18:10.070: INFO: (9) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 5.840281ms)
Jul  4 09:18:10.070: INFO: (9) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 5.881766ms)
Jul  4 09:18:10.070: INFO: (9) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 5.841991ms)
Jul  4 09:18:10.070: INFO: (9) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 5.963303ms)
Jul  4 09:18:10.070: INFO: (9) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 5.930713ms)
Jul  4 09:18:10.070: INFO: (9) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 5.888516ms)
Jul  4 09:18:10.070: INFO: (9) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 5.96724ms)
Jul  4 09:18:10.073: INFO: (10) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 2.782088ms)
Jul  4 09:18:10.073: INFO: (10) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 2.797586ms)
Jul  4 09:18:10.073: INFO: (10) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 2.805826ms)
Jul  4 09:18:10.075: INFO: (10) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: ... (200; 4.455671ms)
Jul  4 09:18:10.075: INFO: (10) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 4.617345ms)
Jul  4 09:18:10.075: INFO: (10) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 4.816348ms)
Jul  4 09:18:10.075: INFO: (10) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 4.881852ms)
Jul  4 09:18:10.075: INFO: (10) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 5.10609ms)
Jul  4 09:18:10.075: INFO: (10) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 5.200422ms)
Jul  4 09:18:10.076: INFO: (10) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 5.592203ms)
Jul  4 09:18:10.076: INFO: (10) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 5.527552ms)
Jul  4 09:18:10.076: INFO: (10) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 5.654637ms)
Jul  4 09:18:10.076: INFO: (10) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 5.75572ms)
Jul  4 09:18:10.078: INFO: (11) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 2.270143ms)
Jul  4 09:18:10.080: INFO: (11) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 3.559908ms)
Jul  4 09:18:10.080: INFO: (11) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 3.712728ms)
Jul  4 09:18:10.080: INFO: (11) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 3.772508ms)
Jul  4 09:18:10.080: INFO: (11) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 3.784448ms)
Jul  4 09:18:10.080: INFO: (11) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 3.723948ms)
Jul  4 09:18:10.080: INFO: (11) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 3.782057ms)
Jul  4 09:18:10.080: INFO: (11) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 3.745973ms)
Jul  4 09:18:10.080: INFO: (11) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 3.952583ms)
Jul  4 09:18:10.080: INFO: (11) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 4.16345ms)
Jul  4 09:18:10.080: INFO: (11) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test (200; 2.700329ms)
Jul  4 09:18:10.084: INFO: (12) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 3.102647ms)
Jul  4 09:18:10.084: INFO: (12) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 3.037206ms)
Jul  4 09:18:10.084: INFO: (12) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 3.145785ms)
Jul  4 09:18:10.084: INFO: (12) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 3.102751ms)
Jul  4 09:18:10.085: INFO: (12) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 3.424634ms)
Jul  4 09:18:10.085: INFO: (12) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test<... (200; 3.756349ms)
Jul  4 09:18:10.085: INFO: (12) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 3.806342ms)
Jul  4 09:18:10.086: INFO: (12) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 4.754382ms)
Jul  4 09:18:10.086: INFO: (12) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 5.027885ms)
Jul  4 09:18:10.086: INFO: (12) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 5.253616ms)
Jul  4 09:18:10.087: INFO: (12) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 5.339961ms)
Jul  4 09:18:10.087: INFO: (12) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 5.44869ms)
Jul  4 09:18:10.087: INFO: (12) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 5.314756ms)
Jul  4 09:18:10.089: INFO: (13) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 2.381486ms)
Jul  4 09:18:10.090: INFO: (13) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 3.214068ms)
Jul  4 09:18:10.090: INFO: (13) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 3.276226ms)
Jul  4 09:18:10.090: INFO: (13) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 3.233485ms)
Jul  4 09:18:10.090: INFO: (13) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 3.340188ms)
Jul  4 09:18:10.090: INFO: (13) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 3.268924ms)
Jul  4 09:18:10.090: INFO: (13) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 3.353273ms)
Jul  4 09:18:10.090: INFO: (13) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 3.609548ms)
Jul  4 09:18:10.091: INFO: (13) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 3.853158ms)
Jul  4 09:18:10.091: INFO: (13) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 3.924648ms)
Jul  4 09:18:10.091: INFO: (13) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 3.867193ms)
Jul  4 09:18:10.091: INFO: (13) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 3.919685ms)
Jul  4 09:18:10.091: INFO: (13) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 4.322322ms)
Jul  4 09:18:10.091: INFO: (13) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test<... (200; 3.24692ms)
Jul  4 09:18:10.096: INFO: (14) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 3.875754ms)
Jul  4 09:18:10.096: INFO: (14) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 4.068919ms)
Jul  4 09:18:10.096: INFO: (14) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 4.033276ms)
Jul  4 09:18:10.096: INFO: (14) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 4.051002ms)
Jul  4 09:18:10.097: INFO: (14) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 4.233384ms)
Jul  4 09:18:10.097: INFO: (14) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 4.275614ms)
Jul  4 09:18:10.097: INFO: (14) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 4.222887ms)
Jul  4 09:18:10.097: INFO: (14) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 5.158148ms)
Jul  4 09:18:10.098: INFO: (14) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 5.23295ms)
Jul  4 09:18:10.098: INFO: (14) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 5.29839ms)
Jul  4 09:18:10.098: INFO: (14) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname2/proxy/: bar (200; 5.236014ms)
Jul  4 09:18:10.098: INFO: (14) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 5.315059ms)
Jul  4 09:18:10.098: INFO: (14) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 5.389893ms)
Jul  4 09:18:10.100: INFO: (15) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 2.64799ms)
Jul  4 09:18:10.101: INFO: (15) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 2.924093ms)
Jul  4 09:18:10.101: INFO: (15) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test<... (200; 3.97153ms)
Jul  4 09:18:10.102: INFO: (15) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 4.036038ms)
Jul  4 09:18:10.102: INFO: (15) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 4.13205ms)
Jul  4 09:18:10.102: INFO: (15) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 4.377952ms)
Jul  4 09:18:10.103: INFO: (15) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 4.616611ms)
Jul  4 09:18:10.103: INFO: (15) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 4.62976ms)
Jul  4 09:18:10.103: INFO: (15) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 4.638163ms)
Jul  4 09:18:10.103: INFO: (15) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 4.719996ms)
Jul  4 09:18:10.106: INFO: (16) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 2.795864ms)
Jul  4 09:18:10.107: INFO: (16) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 3.923555ms)
Jul  4 09:18:10.107: INFO: (16) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 4.012542ms)
Jul  4 09:18:10.107: INFO: (16) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 4.21099ms)
Jul  4 09:18:10.107: INFO: (16) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 4.268599ms)
Jul  4 09:18:10.107: INFO: (16) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 4.361898ms)
Jul  4 09:18:10.107: INFO: (16) /api/v1/namespaces/proxy-6528/services/proxy-service-h4b95:portname1/proxy/: foo (200; 4.341148ms)
Jul  4 09:18:10.107: INFO: (16) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 4.337652ms)
Jul  4 09:18:10.107: INFO: (16) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test<... (200; 5.162243ms)
Jul  4 09:18:10.111: INFO: (17) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 3.074114ms)
Jul  4 09:18:10.111: INFO: (17) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 3.088383ms)
Jul  4 09:18:10.111: INFO: (17) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 3.408965ms)
Jul  4 09:18:10.111: INFO: (17) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 3.353454ms)
Jul  4 09:18:10.112: INFO: (17) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 3.473918ms)
Jul  4 09:18:10.112: INFO: (17) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test<... (200; 4.430443ms)
Jul  4 09:18:10.113: INFO: (17) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 4.407905ms)
Jul  4 09:18:10.113: INFO: (17) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 4.944608ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 9.502686ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:1080/proxy/: ... (200; 9.507599ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 9.594052ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 9.559922ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 9.48993ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname2/proxy/: tls qux (200; 9.609009ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:1080/proxy/: test<... (200; 9.538ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf:160/proxy/: foo (200; 9.834144ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 9.807348ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 9.874479ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 9.906666ms)
Jul  4 09:18:10.123: INFO: (18) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test<... (200; 2.205857ms)
Jul  4 09:18:10.126: INFO: (19) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:443/proxy/: test<... (200; 4.529151ms)
Jul  4 09:18:10.128: INFO: (19) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:460/proxy/: tls baz (200; 4.596696ms)
Jul  4 09:18:10.129: INFO: (19) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname2/proxy/: bar (200; 4.851345ms)
Jul  4 09:18:10.129: INFO: (19) /api/v1/namespaces/proxy-6528/pods/http:proxy-service-h4b95-jgbbf:162/proxy/: bar (200; 5.210645ms)
Jul  4 09:18:10.129: INFO: (19) /api/v1/namespaces/proxy-6528/services/http:proxy-service-h4b95:portname1/proxy/: foo (200; 5.184909ms)
Jul  4 09:18:10.129: INFO: (19) /api/v1/namespaces/proxy-6528/pods/proxy-service-h4b95-jgbbf/proxy/: test (200; 5.131216ms)
Jul  4 09:18:10.129: INFO: (19) /api/v1/namespaces/proxy-6528/pods/https:proxy-service-h4b95-jgbbf:462/proxy/: tls qux (200; 5.202566ms)
Jul  4 09:18:10.129: INFO: (19) /api/v1/namespaces/proxy-6528/services/https:proxy-service-h4b95:tlsportname1/proxy/: tls baz (200; 5.378295ms)
STEP: deleting ReplicationController proxy-service-h4b95 in namespace proxy-6528, will wait for the garbage collector to delete the pods
Jul  4 09:18:10.186: INFO: Deleting ReplicationController proxy-service-h4b95 took: 5.418443ms
Jul  4 09:18:10.487: INFO: Terminating ReplicationController proxy-service-h4b95 pods took: 300.212622ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:18:16.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6528" for this suite.

• [SLOW TEST:12.473 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":175,"skipped":2928,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:18:16.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-494a06d3-9b25-4654-b81c-48714780b46d in namespace container-probe-2146
Jul  4 09:18:20.489: INFO: Started pod busybox-494a06d3-9b25-4654-b81c-48714780b46d in namespace container-probe-2146
STEP: checking the pod's current state and verifying that restartCount is present
Jul  4 09:18:20.492: INFO: Initial restart count of pod busybox-494a06d3-9b25-4654-b81c-48714780b46d is 0
Jul  4 09:19:14.902: INFO: Restart count of pod container-probe-2146/busybox-494a06d3-9b25-4654-b81c-48714780b46d is now 1 (54.410750523s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:19:14.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2146" for this suite.

• [SLOW TEST:58.668 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2932,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:19:14.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:19:21.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6153" for this suite.

• [SLOW TEST:6.289 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2941,"failed":0}
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:19:21.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Jul  4 09:19:26.393: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7737 pod-service-account-82b5d7c0-5789-42c2-9352-a4a146e7f5ca -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jul  4 09:19:26.630: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7737 pod-service-account-82b5d7c0-5789-42c2-9352-a4a146e7f5ca -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jul  4 09:19:27.180: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7737 pod-service-account-82b5d7c0-5789-42c2-9352-a4a146e7f5ca -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:19:27.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7737" for this suite.

• [SLOW TEST:6.162 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":178,"skipped":2942,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:19:27.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 09:19:27.623: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bda3d61-25d4-473a-8a38-4c2f7551912a" in namespace "projected-1630" to be "success or failure"
Jul  4 09:19:27.683: INFO: Pod "downwardapi-volume-4bda3d61-25d4-473a-8a38-4c2f7551912a": Phase="Pending", Reason="", readiness=false. Elapsed: 60.106956ms
Jul  4 09:19:29.716: INFO: Pod "downwardapi-volume-4bda3d61-25d4-473a-8a38-4c2f7551912a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092459093s
Jul  4 09:19:31.731: INFO: Pod "downwardapi-volume-4bda3d61-25d4-473a-8a38-4c2f7551912a": Phase="Running", Reason="", readiness=true. Elapsed: 4.107944256s
Jul  4 09:19:33.735: INFO: Pod "downwardapi-volume-4bda3d61-25d4-473a-8a38-4c2f7551912a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11154458s
STEP: Saw pod success
Jul  4 09:19:33.735: INFO: Pod "downwardapi-volume-4bda3d61-25d4-473a-8a38-4c2f7551912a" satisfied condition "success or failure"
Jul  4 09:19:33.737: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4bda3d61-25d4-473a-8a38-4c2f7551912a container client-container: <nil>
STEP: delete the pod
Jul  4 09:19:33.821: INFO: Waiting for pod downwardapi-volume-4bda3d61-25d4-473a-8a38-4c2f7551912a to disappear
Jul  4 09:19:33.831: INFO: Pod downwardapi-volume-4bda3d61-25d4-473a-8a38-4c2f7551912a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:19:33.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1630" for this suite.

• [SLOW TEST:6.444 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2946,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:19:33.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:19:34.773: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 09:19:36.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:19:38.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:19:40.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451174, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:19:44.158: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:19:44.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-922" for this suite.
STEP: Destroying namespace "webhook-922-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.541 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":180,"skipped":2954,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:19:45.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-105dd922-a512-4c2f-8af3-dbfe5a78b1d8
STEP: Creating a pod to test consume configMaps
Jul  4 09:19:45.653: INFO: Waiting up to 5m0s for pod "pod-configmaps-5bf92bec-f8e3-4df9-8ed2-c59f7ab71d64" in namespace "configmap-1635" to be "success or failure"
Jul  4 09:19:45.670: INFO: Pod "pod-configmaps-5bf92bec-f8e3-4df9-8ed2-c59f7ab71d64": Phase="Pending", Reason="", readiness=false. Elapsed: 17.155467ms
Jul  4 09:19:47.698: INFO: Pod "pod-configmaps-5bf92bec-f8e3-4df9-8ed2-c59f7ab71d64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045526708s
Jul  4 09:19:49.712: INFO: Pod "pod-configmaps-5bf92bec-f8e3-4df9-8ed2-c59f7ab71d64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059695568s
Jul  4 09:19:51.717: INFO: Pod "pod-configmaps-5bf92bec-f8e3-4df9-8ed2-c59f7ab71d64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064074775s
STEP: Saw pod success
Jul  4 09:19:51.717: INFO: Pod "pod-configmaps-5bf92bec-f8e3-4df9-8ed2-c59f7ab71d64" satisfied condition "success or failure"
Jul  4 09:19:51.720: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-5bf92bec-f8e3-4df9-8ed2-c59f7ab71d64 container configmap-volume-test: <nil>
STEP: delete the pod
Jul  4 09:19:51.908: INFO: Waiting for pod pod-configmaps-5bf92bec-f8e3-4df9-8ed2-c59f7ab71d64 to disappear
Jul  4 09:19:52.000: INFO: Pod pod-configmaps-5bf92bec-f8e3-4df9-8ed2-c59f7ab71d64 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:19:52.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1635" for this suite.

• [SLOW TEST:6.784 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2957,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:19:52.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:19:56.127: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 09:19:58.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451195, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:20:00.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451195, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:20:02.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451195, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:20:04.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451196, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451195, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:20:07.629: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that the API server cannot talk to, with a fail-closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: creating a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:20:08.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3429" for this suite.
STEP: Destroying namespace "webhook-3429-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.642 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":182,"skipped":2958,"failed":0}
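For context, the fail-closed behavior this test exercises is controlled by the webhook's `failurePolicy` field. A minimal sketch of such a registration follows; the metadata name, path, and rules are illustrative, not the ones the e2e framework generates (only the service name and namespace are taken from the log above):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example        # illustrative name
webhooks:
  - name: fail-closed.example.com  # illustrative name
    # Fail means: if the webhook endpoint is unreachable, the API server
    # rejects the request instead of letting it through (fail closed).
    failurePolicy: Fail
    clientConfig:
      service:
        name: e2e-test-webhook     # service name from the log above
        namespace: webhook-3429
        path: /unreachable-path    # illustrative
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
```

With `failurePolicy: Ignore` instead, the same unreachable webhook would be skipped and the configmap create would succeed; `Fail` is what makes the rejection unconditional here.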
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:20:08.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7932
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-7932
I0704 09:20:09.087391       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7932, replica count: 2
I0704 09:20:12.137829       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:20:15.138115       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:20:18.138364       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:20:21.138582       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:20:24.138863       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:20:27.139108       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:20:30.139356       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:20:33.139597       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  4 09:20:33.139: INFO: Creating new exec pod
Jul  4 09:20:40.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7932 execpodrt4qk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul  4 09:20:40.495: INFO: stderr: "I0704 09:20:40.430849    3174 log.go:172] (0xc0006e6630) (0xc00078e1e0) Create stream\nI0704 09:20:40.430896    3174 log.go:172] (0xc0006e6630) (0xc00078e1e0) Stream added, broadcasting: 1\nI0704 09:20:40.434152    3174 log.go:172] (0xc0006e6630) Reply frame received for 1\nI0704 09:20:40.434178    3174 log.go:172] (0xc0006e6630) (0xc00078e280) Create stream\nI0704 09:20:40.434189    3174 log.go:172] (0xc0006e6630) (0xc00078e280) Stream added, broadcasting: 3\nI0704 09:20:40.437001    3174 log.go:172] (0xc0006e6630) Reply frame received for 3\nI0704 09:20:40.437037    3174 log.go:172] (0xc0006e6630) (0xc00057c000) Create stream\nI0704 09:20:40.437053    3174 log.go:172] (0xc0006e6630) (0xc00057c000) Stream added, broadcasting: 5\nI0704 09:20:40.438189    3174 log.go:172] (0xc0006e6630) Reply frame received for 5\nI0704 09:20:40.489085    3174 log.go:172] (0xc0006e6630) Data frame received for 5\nI0704 09:20:40.489105    3174 log.go:172] (0xc00057c000) (5) Data frame handling\nI0704 09:20:40.489253    3174 log.go:172] (0xc00057c000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0704 09:20:40.490085    3174 log.go:172] (0xc0006e6630) Data frame received for 5\nI0704 09:20:40.490110    3174 log.go:172] (0xc00057c000) (5) Data frame handling\nI0704 09:20:40.490160    3174 log.go:172] (0xc00057c000) (5) Data frame sent\nI0704 09:20:40.490197    3174 log.go:172] (0xc0006e6630) Data frame received for 5\nI0704 09:20:40.490237    3174 log.go:172] (0xc00057c000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0704 09:20:40.490440    3174 log.go:172] (0xc0006e6630) Data frame received for 3\nI0704 09:20:40.490452    3174 log.go:172] (0xc00078e280) (3) Data frame handling\nI0704 09:20:40.491641    3174 log.go:172] (0xc0006e6630) Data frame received for 1\nI0704 09:20:40.491654    3174 log.go:172] (0xc00078e1e0) (1) Data frame handling\nI0704 09:20:40.491660    3174 log.go:172] 
(0xc00078e1e0) (1) Data frame sent\nI0704 09:20:40.491881    3174 log.go:172] (0xc0006e6630) (0xc00078e1e0) Stream removed, broadcasting: 1\nI0704 09:20:40.492008    3174 log.go:172] (0xc0006e6630) Go away received\nI0704 09:20:40.492312    3174 log.go:172] (0xc0006e6630) (0xc00078e1e0) Stream removed, broadcasting: 1\nI0704 09:20:40.492344    3174 log.go:172] (0xc0006e6630) (0xc00078e280) Stream removed, broadcasting: 3\nI0704 09:20:40.492364    3174 log.go:172] (0xc0006e6630) (0xc00057c000) Stream removed, broadcasting: 5\n"
Jul  4 09:20:40.496: INFO: stdout: ""
Jul  4 09:20:40.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7932 execpodrt4qk -- /bin/sh -x -c nc -zv -t -w 2 10.108.19.124 80'
Jul  4 09:20:40.664: INFO: stderr: "I0704 09:20:40.603521    3197 log.go:172] (0xc0007d09a0) (0xc0007c2140) Create stream\nI0704 09:20:40.603570    3197 log.go:172] (0xc0007d09a0) (0xc0007c2140) Stream added, broadcasting: 1\nI0704 09:20:40.606058    3197 log.go:172] (0xc0007d09a0) Reply frame received for 1\nI0704 09:20:40.606086    3197 log.go:172] (0xc0007d09a0) (0xc0005cc640) Create stream\nI0704 09:20:40.606094    3197 log.go:172] (0xc0007d09a0) (0xc0005cc640) Stream added, broadcasting: 3\nI0704 09:20:40.606736    3197 log.go:172] (0xc0007d09a0) Reply frame received for 3\nI0704 09:20:40.606753    3197 log.go:172] (0xc0007d09a0) (0xc0007c21e0) Create stream\nI0704 09:20:40.606759    3197 log.go:172] (0xc0007d09a0) (0xc0007c21e0) Stream added, broadcasting: 5\nI0704 09:20:40.607366    3197 log.go:172] (0xc0007d09a0) Reply frame received for 5\nI0704 09:20:40.658950    3197 log.go:172] (0xc0007d09a0) Data frame received for 5\nI0704 09:20:40.658978    3197 log.go:172] (0xc0007c21e0) (5) Data frame handling\nI0704 09:20:40.658988    3197 log.go:172] (0xc0007c21e0) (5) Data frame sent\nI0704 09:20:40.658995    3197 log.go:172] (0xc0007d09a0) Data frame received for 5\nI0704 09:20:40.659003    3197 log.go:172] (0xc0007c21e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.19.124 80\nConnection to 10.108.19.124 80 port [tcp/http] succeeded!\nI0704 09:20:40.659021    3197 log.go:172] (0xc0007d09a0) Data frame received for 3\nI0704 09:20:40.659028    3197 log.go:172] (0xc0005cc640) (3) Data frame handling\nI0704 09:20:40.660007    3197 log.go:172] (0xc0007d09a0) Data frame received for 1\nI0704 09:20:40.660019    3197 log.go:172] (0xc0007c2140) (1) Data frame handling\nI0704 09:20:40.660027    3197 log.go:172] (0xc0007c2140) (1) Data frame sent\nI0704 09:20:40.660037    3197 log.go:172] (0xc0007d09a0) (0xc0007c2140) Stream removed, broadcasting: 1\nI0704 09:20:40.660280    3197 log.go:172] (0xc0007d09a0) (0xc0007c2140) Stream removed, broadcasting: 1\nI0704 
09:20:40.660293    3197 log.go:172] (0xc0007d09a0) (0xc0005cc640) Stream removed, broadcasting: 3\nI0704 09:20:40.660301    3197 log.go:172] (0xc0007d09a0) (0xc0007c21e0) Stream removed, broadcasting: 5\n"
Jul  4 09:20:40.665: INFO: stdout: ""
Jul  4 09:20:40.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7932 execpodrt4qk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31136'
Jul  4 09:20:40.835: INFO: stderr: "I0704 09:20:40.787164    3218 log.go:172] (0xc000b87550) (0xc00051be00) Create stream\nI0704 09:20:40.787202    3218 log.go:172] (0xc000b87550) (0xc00051be00) Stream added, broadcasting: 1\nI0704 09:20:40.788977    3218 log.go:172] (0xc000b87550) Reply frame received for 1\nI0704 09:20:40.788998    3218 log.go:172] (0xc000b87550) (0xc00051bea0) Create stream\nI0704 09:20:40.789005    3218 log.go:172] (0xc000b87550) (0xc00051bea0) Stream added, broadcasting: 3\nI0704 09:20:40.789612    3218 log.go:172] (0xc000b87550) Reply frame received for 3\nI0704 09:20:40.789635    3218 log.go:172] (0xc000b87550) (0xc00051bf40) Create stream\nI0704 09:20:40.789643    3218 log.go:172] (0xc000b87550) (0xc00051bf40) Stream added, broadcasting: 5\nI0704 09:20:40.790374    3218 log.go:172] (0xc000b87550) Reply frame received for 5\nI0704 09:20:40.830999    3218 log.go:172] (0xc000b87550) Data frame received for 5\nI0704 09:20:40.831021    3218 log.go:172] (0xc00051bf40) (5) Data frame handling\nI0704 09:20:40.831033    3218 log.go:172] (0xc00051bf40) (5) Data frame sent\nI0704 09:20:40.831041    3218 log.go:172] (0xc000b87550) Data frame received for 5\nI0704 09:20:40.831047    3218 log.go:172] (0xc00051bf40) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31136\nConnection to 172.17.0.10 31136 port [tcp/31136] succeeded!\nI0704 09:20:40.831096    3218 log.go:172] (0xc00051bf40) (5) Data frame sent\nI0704 09:20:40.831199    3218 log.go:172] (0xc000b87550) Data frame received for 3\nI0704 09:20:40.831253    3218 log.go:172] (0xc00051bea0) (3) Data frame handling\nI0704 09:20:40.831443    3218 log.go:172] (0xc000b87550) Data frame received for 5\nI0704 09:20:40.831456    3218 log.go:172] (0xc00051bf40) (5) Data frame handling\nI0704 09:20:40.832160    3218 log.go:172] (0xc000b87550) Data frame received for 1\nI0704 09:20:40.832172    3218 log.go:172] (0xc00051be00) (1) Data frame handling\nI0704 09:20:40.832180    3218 log.go:172] 
(0xc00051be00) (1) Data frame sent\nI0704 09:20:40.832188    3218 log.go:172] (0xc000b87550) (0xc00051be00) Stream removed, broadcasting: 1\nI0704 09:20:40.832199    3218 log.go:172] (0xc000b87550) Go away received\nI0704 09:20:40.832419    3218 log.go:172] (0xc000b87550) (0xc00051be00) Stream removed, broadcasting: 1\nI0704 09:20:40.832429    3218 log.go:172] (0xc000b87550) (0xc00051bea0) Stream removed, broadcasting: 3\nI0704 09:20:40.832435    3218 log.go:172] (0xc000b87550) (0xc00051bf40) Stream removed, broadcasting: 5\n"
Jul  4 09:20:40.835: INFO: stdout: ""
Jul  4 09:20:40.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7932 execpodrt4qk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31136'
Jul  4 09:20:40.988: INFO: stderr: "I0704 09:20:40.943320    3238 log.go:172] (0xc000a1ad10) (0xc0008d6280) Create stream\nI0704 09:20:40.943355    3238 log.go:172] (0xc000a1ad10) (0xc0008d6280) Stream added, broadcasting: 1\nI0704 09:20:40.945373    3238 log.go:172] (0xc000a1ad10) Reply frame received for 1\nI0704 09:20:40.945400    3238 log.go:172] (0xc000a1ad10) (0xc0008d6320) Create stream\nI0704 09:20:40.945407    3238 log.go:172] (0xc000a1ad10) (0xc0008d6320) Stream added, broadcasting: 3\nI0704 09:20:40.946006    3238 log.go:172] (0xc000a1ad10) Reply frame received for 3\nI0704 09:20:40.946027    3238 log.go:172] (0xc000a1ad10) (0xc0008d63c0) Create stream\nI0704 09:20:40.946034    3238 log.go:172] (0xc000a1ad10) (0xc0008d63c0) Stream added, broadcasting: 5\nI0704 09:20:40.946620    3238 log.go:172] (0xc000a1ad10) Reply frame received for 5\nI0704 09:20:40.984119    3238 log.go:172] (0xc000a1ad10) Data frame received for 5\nI0704 09:20:40.984146    3238 log.go:172] (0xc0008d63c0) (5) Data frame handling\nI0704 09:20:40.984156    3238 log.go:172] (0xc0008d63c0) (5) Data frame sent\nI0704 09:20:40.984166    3238 log.go:172] (0xc000a1ad10) Data frame received for 5\nI0704 09:20:40.984173    3238 log.go:172] (0xc0008d63c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31136\nConnection to 172.17.0.8 31136 port [tcp/31136] succeeded!\nI0704 09:20:40.984190    3238 log.go:172] (0xc000a1ad10) Data frame received for 3\nI0704 09:20:40.984197    3238 log.go:172] (0xc0008d6320) (3) Data frame handling\nI0704 09:20:40.984631    3238 log.go:172] (0xc000a1ad10) Data frame received for 1\nI0704 09:20:40.984642    3238 log.go:172] (0xc0008d6280) (1) Data frame handling\nI0704 09:20:40.984650    3238 log.go:172] (0xc0008d6280) (1) Data frame sent\nI0704 09:20:40.984737    3238 log.go:172] (0xc000a1ad10) (0xc0008d6280) Stream removed, broadcasting: 1\nI0704 09:20:40.984760    3238 log.go:172] (0xc000a1ad10) Go away received\nI0704 09:20:40.985258    3238 log.go:172] 
(0xc000a1ad10) (0xc0008d6280) Stream removed, broadcasting: 1\nI0704 09:20:40.985278    3238 log.go:172] (0xc000a1ad10) (0xc0008d6320) Stream removed, broadcasting: 3\nI0704 09:20:40.985290    3238 log.go:172] (0xc000a1ad10) (0xc0008d63c0) Stream removed, broadcasting: 5\n"
Jul  4 09:20:40.988: INFO: stdout: ""
Jul  4 09:20:40.988: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:20:41.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7932" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:32.240 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":183,"skipped":3001,"failed":0}
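The reachability probes above run `nc -zv -t -w 2 <host> <port>` from inside an exec pod, against the service name, the ClusterIP, and each node IP on the NodePort. The same zero-I/O TCP check can be sketched in Python (host and port are placeholders for those targets):

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the
    timeout -- the same check `nc -zv -t -w 2 host port` performs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In the test, a success against the node IP and NodePort (31136 here) is what proves the type change from ExternalName to NodePort took effect.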
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:20:41.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:20:41.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul  4 09:20:43.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8022 create -f -'
Jul  4 09:21:06.048: INFO: stderr: ""
Jul  4 09:21:06.048: INFO: stdout: "e2e-test-crd-publish-openapi-2317-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jul  4 09:21:06.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8022 delete e2e-test-crd-publish-openapi-2317-crds test-cr'
Jul  4 09:21:06.180: INFO: stderr: ""
Jul  4 09:21:06.180: INFO: stdout: "e2e-test-crd-publish-openapi-2317-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jul  4 09:21:06.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8022 apply -f -'
Jul  4 09:21:06.430: INFO: stderr: ""
Jul  4 09:21:06.430: INFO: stdout: "e2e-test-crd-publish-openapi-2317-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jul  4 09:21:06.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8022 delete e2e-test-crd-publish-openapi-2317-crds test-cr'
Jul  4 09:21:06.539: INFO: stderr: ""
Jul  4 09:21:06.539: INFO: stdout: "e2e-test-crd-publish-openapi-2317-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul  4 09:21:06.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2317-crds'
Jul  4 09:21:06.767: INFO: stderr: ""
Jul  4 09:21:06.767: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2317-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:21:08.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8022" for this suite.

• [SLOW TEST:27.588 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":184,"skipped":3014,"failed":0}
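"Preserving unknown fields at the schema root" refers to `x-kubernetes-preserve-unknown-fields: true` in the CRD's structural schema, which disables pruning so `kubectl create`/`apply` accept arbitrary properties, as the steps above verify. A hedged sketch of such a CRD (the group and names are illustrative; the e2e framework generates its own):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com   # illustrative; the test generates a random name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # Pruning is disabled at the root: clients may set any unknown
          # properties on objects of this kind.
          x-kubernetes-preserve-unknown-fields: true
```

This also explains the near-empty `kubectl explain` output in the log: with no declared properties, only the kind, version, and an empty description are published.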
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:21:08.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:21:14.879: INFO: Waiting up to 5m0s for pod "client-envvars-34f8b29d-01c9-4108-897b-7ab6416ef5e8" in namespace "pods-923" to be "success or failure"
Jul  4 09:21:14.912: INFO: Pod "client-envvars-34f8b29d-01c9-4108-897b-7ab6416ef5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.934003ms
Jul  4 09:21:16.915: INFO: Pod "client-envvars-34f8b29d-01c9-4108-897b-7ab6416ef5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036518787s
Jul  4 09:21:18.920: INFO: Pod "client-envvars-34f8b29d-01c9-4108-897b-7ab6416ef5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040880389s
Jul  4 09:21:20.923: INFO: Pod "client-envvars-34f8b29d-01c9-4108-897b-7ab6416ef5e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044483676s
STEP: Saw pod success
Jul  4 09:21:20.923: INFO: Pod "client-envvars-34f8b29d-01c9-4108-897b-7ab6416ef5e8" satisfied condition "success or failure"
Jul  4 09:21:20.925: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-34f8b29d-01c9-4108-897b-7ab6416ef5e8 container env3cont: 
STEP: delete the pod
Jul  4 09:21:20.997: INFO: Waiting for pod client-envvars-34f8b29d-01c9-4108-897b-7ab6416ef5e8 to disappear
Jul  4 09:21:21.070: INFO: Pod client-envvars-34f8b29d-01c9-4108-897b-7ab6416ef5e8 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:21:21.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-923" for this suite.

• [SLOW TEST:12.421 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3026,"failed":0}
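The environment variables this test looks for follow the Kubernetes service-environment-variable convention: the service name is upper-cased, dashes become underscores, and `_SERVICE_HOST`/`_SERVICE_PORT` suffixes are appended. A small sketch of that naming rule (the helper name is ours, not the framework's):

```python
def service_env_names(service_name: str) -> list:
    """Names of the env vars the kubelet injects into pods created after
    a service, per the Kubernetes service env-var naming convention."""
    base = service_name.upper().replace("-", "_")
    return [base + "_SERVICE_HOST", base + "_SERVICE_PORT"]
```

Note the ordering dependency the test relies on: only pods started after the service exists receive these variables, which is why the client pod is created in a later step.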
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:21:21.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0704 09:21:22.444664       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  4 09:21:22.444: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:21:22.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5920" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":186,"skipped":3034,"failed":0}
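What the garbage collector verifies here: the Deployment owns its ReplicaSet (and the ReplicaSet owns its Pods) via `ownerReferences`, so a non-orphaning delete of the Deployment eventually removes the ReplicaSet and Pods too. A toy model of that cascade, not the real controller logic, assuming a flat name-to-owner map:

```python
def cascade_delete(objects: dict, name: str) -> None:
    """Delete `name` and, transitively, every object whose owner chain
    leads to it -- a toy model of ownerReference-based garbage collection.
    `objects` maps object name -> owner name (or None for no owner)."""
    doomed = {name}
    changed = True
    while changed:  # keep sweeping until no new dependents are found
        changed = False
        for obj, owner in objects.items():
            if obj not in doomed and owner in doomed:
                doomed.add(obj)
                changed = True
    for obj in doomed:
        objects.pop(obj, None)
```

The "expected 0 pods, got 2 pods" STEP in the log reflects the eventual-consistency of this process: the test polls until the dependents are gone rather than expecting instant deletion.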
SS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:21:22.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1174
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-1174
STEP: creating replication controller externalsvc in namespace services-1174
I0704 09:21:23.493925       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1174, replica count: 2
I0704 09:21:26.544345       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:21:29.544664       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jul  4 09:21:29.580: INFO: Creating new exec pod
Jul  4 09:21:33.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1174 execpodln6xc -- /bin/sh -x -c nslookup clusterip-service'
Jul  4 09:21:33.821: INFO: stderr: "I0704 09:21:33.744642    3368 log.go:172] (0xc000024dc0) (0xc0006fd9a0) Create stream\nI0704 09:21:33.744692    3368 log.go:172] (0xc000024dc0) (0xc0006fd9a0) Stream added, broadcasting: 1\nI0704 09:21:33.746858    3368 log.go:172] (0xc000024dc0) Reply frame received for 1\nI0704 09:21:33.746886    3368 log.go:172] (0xc000024dc0) (0xc000990000) Create stream\nI0704 09:21:33.746894    3368 log.go:172] (0xc000024dc0) (0xc000990000) Stream added, broadcasting: 3\nI0704 09:21:33.747711    3368 log.go:172] (0xc000024dc0) Reply frame received for 3\nI0704 09:21:33.747751    3368 log.go:172] (0xc000024dc0) (0xc0006fdb80) Create stream\nI0704 09:21:33.747764    3368 log.go:172] (0xc000024dc0) (0xc0006fdb80) Stream added, broadcasting: 5\nI0704 09:21:33.748475    3368 log.go:172] (0xc000024dc0) Reply frame received for 5\nI0704 09:21:33.802577    3368 log.go:172] (0xc000024dc0) Data frame received for 5\nI0704 09:21:33.802610    3368 log.go:172] (0xc0006fdb80) (5) Data frame handling\nI0704 09:21:33.802637    3368 log.go:172] (0xc0006fdb80) (5) Data frame sent\n+ nslookup clusterip-service\nI0704 09:21:33.813818    3368 log.go:172] (0xc000024dc0) Data frame received for 3\nI0704 09:21:33.813845    3368 log.go:172] (0xc000990000) (3) Data frame handling\nI0704 09:21:33.813866    3368 log.go:172] (0xc000990000) (3) Data frame sent\nI0704 09:21:33.814900    3368 log.go:172] (0xc000024dc0) Data frame received for 3\nI0704 09:21:33.814928    3368 log.go:172] (0xc000990000) (3) Data frame handling\nI0704 09:21:33.814942    3368 log.go:172] (0xc000990000) (3) Data frame sent\nI0704 09:21:33.815239    3368 log.go:172] (0xc000024dc0) Data frame received for 5\nI0704 09:21:33.815252    3368 log.go:172] (0xc0006fdb80) (5) Data frame handling\nI0704 09:21:33.815272    3368 log.go:172] (0xc000024dc0) Data frame received for 3\nI0704 09:21:33.815291    3368 log.go:172] (0xc000990000) (3) Data frame handling\nI0704 09:21:33.817610    3368 log.go:172] 
(0xc000024dc0) Data frame received for 1\nI0704 09:21:33.817638    3368 log.go:172] (0xc0006fd9a0) (1) Data frame handling\nI0704 09:21:33.817651    3368 log.go:172] (0xc0006fd9a0) (1) Data frame sent\nI0704 09:21:33.817668    3368 log.go:172] (0xc000024dc0) (0xc0006fd9a0) Stream removed, broadcasting: 1\nI0704 09:21:33.817928    3368 log.go:172] (0xc000024dc0) Go away received\nI0704 09:21:33.818094    3368 log.go:172] (0xc000024dc0) (0xc0006fd9a0) Stream removed, broadcasting: 1\nI0704 09:21:33.818120    3368 log.go:172] (0xc000024dc0) (0xc000990000) Stream removed, broadcasting: 3\nI0704 09:21:33.818133    3368 log.go:172] (0xc000024dc0) (0xc0006fdb80) Stream removed, broadcasting: 5\n"
Jul  4 09:21:33.822: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1174.svc.cluster.local\tcanonical name = externalsvc.services-1174.svc.cluster.local.\nName:\texternalsvc.services-1174.svc.cluster.local\nAddress: 10.109.184.180\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-1174, will wait for the garbage collector to delete the pods
Jul  4 09:21:33.882: INFO: Deleting ReplicationController externalsvc took: 6.676927ms
Jul  4 09:21:34.182: INFO: Terminating ReplicationController externalsvc pods took: 300.375897ms
Jul  4 09:21:43.127: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:21:43.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1174" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:21.213 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":187,"skipped":3036,"failed":0}
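The nslookup output above shows the renamed service resolving through a CNAME to the backing service. A minimal sketch of the object involved, not the e2e framework's own code: the names and namespace are taken from the log, while the `cluster.local` DNS domain is the common default and an assumption here.

```python
# Sketch of the Service after the type change, assuming the default
# "cluster.local" cluster DNS domain (names/namespace are from the log above).

def service_dns_name(name: str, namespace: str, domain: str = "cluster.local") -> str:
    """FQDN that cluster DNS serves for a Service."""
    return f"{name}.{namespace}.svc.{domain}"

# Once the type is ExternalName, clusterip-service no longer has its own
# ClusterIP; cluster DNS answers with a CNAME to spec.externalName instead,
# which matches the "canonical name = externalsvc..." line in the log.
external_name_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "clusterip-service", "namespace": "services-1174"},
    "spec": {
        "type": "ExternalName",
        # The test points the ExternalName at a second, in-cluster service.
        "externalName": service_dns_name("externalsvc", "services-1174"),
    },
}
```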
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:21:43.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul  4 09:21:54.187: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:21:56.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-767" for this suite.

• [SLOW TEST:13.186 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":188,"skipped":3056,"failed":0}
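Adoption and release in the test above both hinge on label-selector matching. A minimal model of the `matchLabels` rule (this is not the controller's code, and the relabeled value is a hypothetical stand-in — the log does not show what the test changes the label to):

```python
# Model of the matchLabels rule behind adoption and release: a ReplicaSet
# adopts an orphaned pod whose labels satisfy its selector, and releases a
# pod once its labels stop matching.

def selector_matches(match_labels: dict, pod_labels: dict) -> bool:
    """True when every selector key/value pair is present on the pod."""
    return all(pod_labels.get(k) == v for k, v in match_labels.items())

selector = {"name": "pod-adoption-release"}            # pod name from the log above
orphan_pod_labels = {"name": "pod-adoption-release"}
relabeled_pod_labels = {"name": "some-other-value"}    # hypothetical changed label

adopted = selector_matches(selector, orphan_pod_labels)          # orphan is adopted
released = not selector_matches(selector, relabeled_pod_labels)  # relabeled pod is released
```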
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:21:56.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 09:22:00.290: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97" in namespace "projected-8685" to be "success or failure"
Jul  4 09:22:01.227: INFO: Pod "downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 937.158217ms
Jul  4 09:22:03.287: INFO: Pod "downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.997603028s
Jul  4 09:22:06.085: INFO: Pod "downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 5.795357621s
Jul  4 09:22:08.685: INFO: Pod "downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 8.395650649s
Jul  4 09:22:11.317: INFO: Pod "downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97": Phase="Running", Reason="", readiness=true. Elapsed: 11.027458953s
Jul  4 09:22:13.343: INFO: Pod "downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97": Phase="Running", Reason="", readiness=true. Elapsed: 13.053644546s
Jul  4 09:22:15.563: INFO: Pod "downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.27342413s
STEP: Saw pod success
Jul  4 09:22:15.563: INFO: Pod "downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97" satisfied condition "success or failure"
Jul  4 09:22:15.566: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97 container client-container: 
STEP: delete the pod
Jul  4 09:22:16.068: INFO: Waiting for pod downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97 to disappear
Jul  4 09:22:16.539: INFO: Pod downwardapi-volume-f3b9fede-c775-41bf-aa3a-c18636be9e97 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:22:16.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8685" for this suite.

• [SLOW TEST:19.982 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3101,"failed":0}
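The pod the test creates exposes its own CPU limit to the container through a projected downwardAPI volume. A hypothetical sketch of that manifest — the container name comes from the log, but the image, command, limit value, and divisor are illustrative assumptions, not copied from the suite:

```python
# Sketch of a pod exposing its CPU limit via a projected downwardAPI volume.
# Image, command, limit, and divisor are illustrative assumptions.
cpu_limit_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "containers": [{
            "name": "client-container",  # container name from the log above
            "image": "busybox",          # assumption; the suite uses its own test image
            "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
            "resources": {"limits": {"cpu": "500m"}},  # illustrative limit
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {"sources": [{
                "downwardAPI": {"items": [{
                    "path": "cpu_limit",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.cpu",
                        "divisor": "1m",  # 500m limit / 1m divisor -> file contains "500"
                    },
                }]},
            }]},
        }],
    },
}
```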
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:22:16.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2448
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  4 09:22:18.361: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  4 09:23:01.295: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.141:8080/dial?request=hostname&protocol=udp&host=10.244.1.135&port=8081&tries=1'] Namespace:pod-network-test-2448 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:23:01.295: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:23:01.415090       6 log.go:172] (0xc002d14840) (0xc0012bbea0) Create stream
I0704 09:23:01.415127       6 log.go:172] (0xc002d14840) (0xc0012bbea0) Stream added, broadcasting: 1
I0704 09:23:01.417351       6 log.go:172] (0xc002d14840) Reply frame received for 1
I0704 09:23:01.417394       6 log.go:172] (0xc002d14840) (0xc001a080a0) Create stream
I0704 09:23:01.417413       6 log.go:172] (0xc002d14840) (0xc001a080a0) Stream added, broadcasting: 3
I0704 09:23:01.418270       6 log.go:172] (0xc002d14840) Reply frame received for 3
I0704 09:23:01.418300       6 log.go:172] (0xc002d14840) (0xc001780780) Create stream
I0704 09:23:01.418315       6 log.go:172] (0xc002d14840) (0xc001780780) Stream added, broadcasting: 5
I0704 09:23:01.419206       6 log.go:172] (0xc002d14840) Reply frame received for 5
I0704 09:23:01.491812       6 log.go:172] (0xc002d14840) Data frame received for 3
I0704 09:23:01.491831       6 log.go:172] (0xc001a080a0) (3) Data frame handling
I0704 09:23:01.491842       6 log.go:172] (0xc001a080a0) (3) Data frame sent
I0704 09:23:01.492597       6 log.go:172] (0xc002d14840) Data frame received for 3
I0704 09:23:01.492623       6 log.go:172] (0xc001a080a0) (3) Data frame handling
I0704 09:23:01.492648       6 log.go:172] (0xc002d14840) Data frame received for 5
I0704 09:23:01.492672       6 log.go:172] (0xc001780780) (5) Data frame handling
I0704 09:23:01.494143       6 log.go:172] (0xc002d14840) Data frame received for 1
I0704 09:23:01.494189       6 log.go:172] (0xc0012bbea0) (1) Data frame handling
I0704 09:23:01.494249       6 log.go:172] (0xc0012bbea0) (1) Data frame sent
I0704 09:23:01.494274       6 log.go:172] (0xc002d14840) (0xc0012bbea0) Stream removed, broadcasting: 1
I0704 09:23:01.494293       6 log.go:172] (0xc002d14840) Go away received
I0704 09:23:01.494393       6 log.go:172] (0xc002d14840) (0xc0012bbea0) Stream removed, broadcasting: 1
I0704 09:23:01.494425       6 log.go:172] (0xc002d14840) (0xc001a080a0) Stream removed, broadcasting: 3
I0704 09:23:01.494442       6 log.go:172] (0xc002d14840) (0xc001780780) Stream removed, broadcasting: 5
Jul  4 09:23:01.494: INFO: Waiting for responses: map[]
Jul  4 09:23:01.641: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.141:8080/dial?request=hostname&protocol=udp&host=10.244.2.135&port=8081&tries=1'] Namespace:pod-network-test-2448 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:23:01.642: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:23:01.900147       6 log.go:172] (0xc002c406e0) (0xc001780f00) Create stream
I0704 09:23:01.900181       6 log.go:172] (0xc002c406e0) (0xc001780f00) Stream added, broadcasting: 1
I0704 09:23:01.901630       6 log.go:172] (0xc002c406e0) Reply frame received for 1
I0704 09:23:01.901659       6 log.go:172] (0xc002c406e0) (0xc0022e30e0) Create stream
I0704 09:23:01.901667       6 log.go:172] (0xc002c406e0) (0xc0022e30e0) Stream added, broadcasting: 3
I0704 09:23:01.902262       6 log.go:172] (0xc002c406e0) Reply frame received for 3
I0704 09:23:01.902290       6 log.go:172] (0xc002c406e0) (0xc0012bbf40) Create stream
I0704 09:23:01.902302       6 log.go:172] (0xc002c406e0) (0xc0012bbf40) Stream added, broadcasting: 5
I0704 09:23:01.902922       6 log.go:172] (0xc002c406e0) Reply frame received for 5
I0704 09:23:01.964802       6 log.go:172] (0xc002c406e0) Data frame received for 3
I0704 09:23:01.964840       6 log.go:172] (0xc0022e30e0) (3) Data frame handling
I0704 09:23:01.964860       6 log.go:172] (0xc0022e30e0) (3) Data frame sent
I0704 09:23:01.965108       6 log.go:172] (0xc002c406e0) Data frame received for 3
I0704 09:23:01.965256       6 log.go:172] (0xc0022e30e0) (3) Data frame handling
I0704 09:23:01.965349       6 log.go:172] (0xc002c406e0) Data frame received for 5
I0704 09:23:01.965359       6 log.go:172] (0xc0012bbf40) (5) Data frame handling
I0704 09:23:01.966584       6 log.go:172] (0xc002c406e0) Data frame received for 1
I0704 09:23:01.966617       6 log.go:172] (0xc001780f00) (1) Data frame handling
I0704 09:23:01.966634       6 log.go:172] (0xc001780f00) (1) Data frame sent
I0704 09:23:01.966741       6 log.go:172] (0xc002c406e0) (0xc001780f00) Stream removed, broadcasting: 1
I0704 09:23:01.966819       6 log.go:172] (0xc002c406e0) Go away received
I0704 09:23:01.966849       6 log.go:172] (0xc002c406e0) (0xc001780f00) Stream removed, broadcasting: 1
I0704 09:23:01.966875       6 log.go:172] (0xc002c406e0) (0xc0022e30e0) Stream removed, broadcasting: 3
I0704 09:23:01.966892       6 log.go:172] (0xc002c406e0) (0xc0012bbf40) Stream removed, broadcasting: 5
Jul  4 09:23:01.966: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:23:01.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2448" for this suite.

• [SLOW TEST:45.140 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3104,"failed":0}
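The two `ExecWithOptions` lines above curl the test framework's `/dial` proxy endpoint, which relays a probe to each target pod. The query string can be reconstructed exactly from the parameters visible in the log (all values below are taken from it):

```python
# Rebuild the /dial probe URL the test curls from the host-test-container-pod.
# The proxy pod relays a "hostname" request to the target over the given
# protocol and reports which pod answered.
from urllib.parse import urlencode

def dial_url(proxy_ip: str, target_ip: str, protocol: str,
             port: int = 8081, tries: int = 1) -> str:
    query = urlencode({
        "request": "hostname",   # ask the target to report its hostname
        "protocol": protocol,    # "udp" in this test
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:8080/dial?{query}"

# Values from the first probe in the log above.
url = dial_url("10.244.1.141", "10.244.1.135", "udp")
```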
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:23:01.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  4 09:23:03.962: INFO: Waiting up to 5m0s for pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af" in namespace "emptydir-1674" to be "success or failure"
Jul  4 09:23:04.006: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af": Phase="Pending", Reason="", readiness=false. Elapsed: 44.264954ms
Jul  4 09:23:06.122: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160485805s
Jul  4 09:23:08.151: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188900766s
Jul  4 09:23:10.344: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382078719s
Jul  4 09:23:12.348: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af": Phase="Pending", Reason="", readiness=false. Elapsed: 8.385936372s
Jul  4 09:23:14.575: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af": Phase="Pending", Reason="", readiness=false. Elapsed: 10.613114223s
Jul  4 09:23:16.623: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af": Phase="Pending", Reason="", readiness=false. Elapsed: 12.661553059s
Jul  4 09:23:18.628: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af": Phase="Pending", Reason="", readiness=false. Elapsed: 14.666211087s
Jul  4 09:23:20.633: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af": Phase="Pending", Reason="", readiness=false. Elapsed: 16.671594332s
Jul  4 09:23:22.638: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af": Phase="Pending", Reason="", readiness=false. Elapsed: 18.675830331s
Jul  4 09:23:24.641: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.679068114s
STEP: Saw pod success
Jul  4 09:23:24.641: INFO: Pod "pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af" satisfied condition "success or failure"
Jul  4 09:23:24.643: INFO: Trying to get logs from node jerma-worker pod pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af container test-container: 
STEP: delete the pod
Jul  4 09:23:24.676: INFO: Waiting for pod pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af to disappear
Jul  4 09:23:24.724: INFO: Pod pod-7efd3853-3be6-439d-b8ac-97cb0c09b2af no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:23:24.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1674" for this suite.

• [SLOW TEST:22.755 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3161,"failed":0}
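The `(root,0644,default)` case writes a file into the emptyDir mount and verifies its permission bits. A local, cluster-free illustration of that check (a plain temp directory stands in for the volume mount; the file name and content are illustrative):

```python
# Local illustration of the (root,0644,default) check: write a file into the
# "volume" and verify its permission bits are exactly 0644 (rw-r--r--).
import os
import stat
import tempfile

volume_dir = tempfile.mkdtemp()            # stands in for the emptyDir mount
file_path = os.path.join(volume_dir, "data-1")
with open(file_path, "w") as f:
    f.write("mount-tester new file\n")
os.chmod(file_path, 0o644)                 # mode requested by the test case name

observed_mode = stat.S_IMODE(os.stat(file_path).st_mode)
```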
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:23:24.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-d937ba79-e85e-4f32-af2c-2c8439098641 in namespace container-probe-165
Jul  4 09:23:45.088: INFO: Started pod liveness-d937ba79-e85e-4f32-af2c-2c8439098641 in namespace container-probe-165
STEP: checking the pod's current state and verifying that restartCount is present
Jul  4 09:23:45.091: INFO: Initial restart count of pod liveness-d937ba79-e85e-4f32-af2c-2c8439098641 is 0
Jul  4 09:24:12.686: INFO: Restart count of pod container-probe-165/liveness-d937ba79-e85e-4f32-af2c-2c8439098641 is now 1 (27.594815554s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:24:13.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-165" for this suite.

• [SLOW TEST:48.932 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3170,"failed":0}
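The restart counted above (0 → 1 after ~27s) is driven by an HTTP liveness probe against `/healthz`. A hypothetical manifest sketch of such a pod — the pod name pattern matches the log, but the image, port, and probe timings are illustrative assumptions, not the suite's exact values:

```python
# Sketch of a pod whose container is restarted when its /healthz HTTP
# liveness probe starts failing. Image and probe timings are assumptions.
liveness_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "liveness-example", "labels": {"test": "liveness"}},
    "spec": {
        "containers": [{
            "name": "liveness",
            "image": "example/liveness-server",  # hypothetical image
            "livenessProbe": {
                "httpGet": {"path": "/healthz", "port": 8080},
                "initialDelaySeconds": 15,  # illustrative values
                "failureThreshold": 1,
            },
        }],
    },
}
```

When `/healthz` begins returning non-2xx, the kubelet kills and restarts the container, which is what bumps `restartCount` in the log above.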
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:24:13.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jul  4 09:24:14.190: INFO: >>> kubeConfig: /root/.kube/config
Jul  4 09:24:17.572: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:24:31.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3183" for this suite.

• [SLOW TEST:17.353 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":193,"skipped":3182,"failed":0}
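The test registers CRDs in two distinct API groups and checks both surface in the served OpenAPI document. A rough sketch of two such CRDs — the group names, kinds, and schema are all hypothetical; the suite's actual fixtures are not shown in this log:

```python
# Sketch of two CRDs in different API groups (all names hypothetical).
# A CRD's metadata.name must be "<plural>.<group>".
def crd_manifest(group: str, kind: str, plural: str, version: str = "v1") -> dict:
    return {
        "apiVersion": "apiextensions.k8s.io/v1",
        "kind": "CustomResourceDefinition",
        "metadata": {"name": f"{plural}.{group}"},
        "spec": {
            "group": group,
            "scope": "Namespaced",
            "names": {"kind": kind, "plural": plural,
                      "singular": plural.rstrip("s"), "listKind": kind + "List"},
            "versions": [{
                "name": version, "served": True, "storage": True,
                "schema": {"openAPIV3Schema": {
                    "type": "object",
                    "x-kubernetes-preserve-unknown-fields": True,
                }},
            }],
        },
    }

crd_a = crd_manifest("example-a.example.com", "Foo", "foos")
crd_b = crd_manifest("example-b.example.com", "Bar", "bars")
```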
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:24:31.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul  4 09:24:31.189: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  4 09:24:31.270: INFO: Waiting for terminating namespaces to be deleted...
Jul  4 09:24:31.272: INFO: 
Logging pods the kubelet thinks is on node jerma-worker before test
Jul  4 09:24:31.279: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  4 09:24:31.279: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  4 09:24:31.279: INFO: bono-dcf7c7dd4-n5pqx from default started at 2020-07-04 09:22:26 +0000 UTC (2 container statuses recorded)
Jul  4 09:24:31.279: INFO: 	Container bono ready: false, restart count 0
Jul  4 09:24:31.279: INFO: 	Container tailer ready: false, restart count 0
Jul  4 09:24:31.279: INFO: etcd-6bb7795595-s5x2k from default started at 2020-07-04 09:22:26 +0000 UTC (1 container statuses recorded)
Jul  4 09:24:31.279: INFO: 	Container etcd ready: true, restart count 0
Jul  4 09:24:31.279: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  4 09:24:31.279: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  4 09:24:31.279: INFO: homestead-prov-7dcf44c88c-7c8sc from default started at 2020-07-04 09:22:26 +0000 UTC (1 container statuses recorded)
Jul  4 09:24:31.279: INFO: 	Container homestead-prov ready: false, restart count 0
Jul  4 09:24:31.279: INFO: sprout-56cb7dc9d8-457zm from default started at 2020-07-04 09:22:27 +0000 UTC (2 container statuses recorded)
Jul  4 09:24:31.279: INFO: 	Container sprout ready: false, restart count 0
Jul  4 09:24:31.279: INFO: 	Container tailer ready: false, restart count 0
Jul  4 09:24:31.279: INFO: chronos-587c97bb-tt65w from default started at 2020-07-04 09:22:26 +0000 UTC (2 container statuses recorded)
Jul  4 09:24:31.279: INFO: 	Container chronos ready: false, restart count 0
Jul  4 09:24:31.279: INFO: 	Container tailer ready: false, restart count 0
Jul  4 09:24:31.279: INFO: 
Logging pods the kubelet thinks is on node jerma-worker2 before test
Jul  4 09:24:31.295: INFO: astaire-66c6bd87b-tw97c from default started at 2020-07-04 09:22:26 +0000 UTC (2 container statuses recorded)
Jul  4 09:24:31.295: INFO: 	Container astaire ready: false, restart count 0
Jul  4 09:24:31.295: INFO: 	Container tailer ready: false, restart count 0
Jul  4 09:24:31.295: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  4 09:24:31.295: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  4 09:24:31.295: INFO: ellis-6558f497f5-8xl7d from default started at 2020-07-04 09:22:26 +0000 UTC (1 container statuses recorded)
Jul  4 09:24:31.295: INFO: 	Container ellis ready: false, restart count 0
Jul  4 09:24:31.295: INFO: ralf-7c4f496cfc-fvpp2 from default started at 2020-07-04 09:22:27 +0000 UTC (2 container statuses recorded)
Jul  4 09:24:31.295: INFO: 	Container ralf ready: false, restart count 0
Jul  4 09:24:31.295: INFO: 	Container tailer ready: false, restart count 0
Jul  4 09:24:31.295: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  4 09:24:31.295: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  4 09:24:31.295: INFO: cassandra-98cd9d58-kfrc9 from default started at 2020-07-04 09:22:26 +0000 UTC (1 container statuses recorded)
Jul  4 09:24:31.295: INFO: 	Container cassandra ready: false, restart count 0
Jul  4 09:24:31.295: INFO: homer-6d8b687db8-clrzm from default started at 2020-07-04 09:22:26 +0000 UTC (1 container statuses recorded)
Jul  4 09:24:31.295: INFO: 	Container homer ready: false, restart count 0
Jul  4 09:24:31.295: INFO: homestead-db9dbdb6c-tmjjc from default started at 2020-07-04 09:22:26 +0000 UTC (2 container statuses recorded)
Jul  4 09:24:31.295: INFO: 	Container homestead ready: false, restart count 0
Jul  4 09:24:31.295: INFO: 	Container tailer ready: false, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-cc1e5b7a-bcb5-4c6a-a489-0a0fe33304ec 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-cc1e5b7a-bcb5-4c6a-a489-0a0fe33304ec off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-cc1e5b7a-bcb5-4c6a-a489-0a0fe33304ec
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:29:57.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7025" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:326.700 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":194,"skipped":3202,"failed":0}
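The scheduling rule this test exercises can be stated compactly: two pods conflict on a node when they request the same hostPort and protocol and their hostIPs overlap, where `0.0.0.0` (the default when hostIP is empty, per the log) overlaps every address. A sketch that mirrors the scheduler's check only approximately — the port values come from the log, the protocol is assumed:

```python
# Approximate model of the scheduler's hostPort conflict predicate.
WILDCARD = "0.0.0.0"

def host_ports_conflict(a: tuple, b: tuple) -> bool:
    """a, b: (hostIP, hostPort, protocol) tuples."""
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False  # different port or protocol never conflicts
    # Same port+protocol: conflict when the hostIP ranges overlap.
    return WILDCARD in (ip_a, ip_b) or ip_a == ip_b

pod4 = (WILDCARD, 54322, "TCP")      # scheduled first (hostIP left empty)
pod5 = ("127.0.0.1", 54322, "TCP")   # expected to stay Pending on the same node
```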
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:29:57.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul  4 09:29:57.804: INFO: Waiting up to 5m0s for pod "pod-8a5fc911-3c1f-44a6-a2fc-3d2e86d8ad27" in namespace "emptydir-1125" to be "success or failure"
Jul  4 09:29:57.819: INFO: Pod "pod-8a5fc911-3c1f-44a6-a2fc-3d2e86d8ad27": Phase="Pending", Reason="", readiness=false. Elapsed: 14.947416ms
Jul  4 09:29:59.823: INFO: Pod "pod-8a5fc911-3c1f-44a6-a2fc-3d2e86d8ad27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018702017s
Jul  4 09:30:01.827: INFO: Pod "pod-8a5fc911-3c1f-44a6-a2fc-3d2e86d8ad27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02323652s
STEP: Saw pod success
Jul  4 09:30:01.827: INFO: Pod "pod-8a5fc911-3c1f-44a6-a2fc-3d2e86d8ad27" satisfied condition "success or failure"
Jul  4 09:30:01.831: INFO: Trying to get logs from node jerma-worker pod pod-8a5fc911-3c1f-44a6-a2fc-3d2e86d8ad27 container test-container: 
STEP: delete the pod
Jul  4 09:30:02.084: INFO: Waiting for pod pod-8a5fc911-3c1f-44a6-a2fc-3d2e86d8ad27 to disappear
Jul  4 09:30:02.282: INFO: Pod pod-8a5fc911-3c1f-44a6-a2fc-3d2e86d8ad27 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:30:02.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1125" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3226,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:30:02.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:30:28.465: INFO: Container started at 2020-07-04 09:30:06 +0000 UTC, pod became ready at 2020-07-04 09:30:26 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:30:28.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9606" for this suite.

• [SLOW TEST:26.180 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3231,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:30:28.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-4d744869-b7c9-43fb-92ae-00b8448c2230
STEP: Creating a pod to test consume secrets
Jul  4 09:30:29.259: INFO: Waiting up to 5m0s for pod "pod-secrets-0e0937f9-ba64-4039-b8cb-d4b124231363" in namespace "secrets-391" to be "success or failure"
Jul  4 09:30:29.275: INFO: Pod "pod-secrets-0e0937f9-ba64-4039-b8cb-d4b124231363": Phase="Pending", Reason="", readiness=false. Elapsed: 15.592612ms
Jul  4 09:30:31.360: INFO: Pod "pod-secrets-0e0937f9-ba64-4039-b8cb-d4b124231363": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101364253s
Jul  4 09:30:33.365: INFO: Pod "pod-secrets-0e0937f9-ba64-4039-b8cb-d4b124231363": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106018221s
Jul  4 09:30:35.369: INFO: Pod "pod-secrets-0e0937f9-ba64-4039-b8cb-d4b124231363": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109939942s
STEP: Saw pod success
Jul  4 09:30:35.369: INFO: Pod "pod-secrets-0e0937f9-ba64-4039-b8cb-d4b124231363" satisfied condition "success or failure"
Jul  4 09:30:35.371: INFO: Trying to get logs from node jerma-worker pod pod-secrets-0e0937f9-ba64-4039-b8cb-d4b124231363 container secret-volume-test: 
STEP: delete the pod
Jul  4 09:30:35.426: INFO: Waiting for pod pod-secrets-0e0937f9-ba64-4039-b8cb-d4b124231363 to disappear
Jul  4 09:30:35.600: INFO: Pod pod-secrets-0e0937f9-ba64-4039-b8cb-d4b124231363 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:30:35.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-391" for this suite.

• [SLOW TEST:7.169 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:30:35.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:30:36.332: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 09:30:38.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451836, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451836, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451836, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451836, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:30:40.502: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451836, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451836, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451836, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729451836, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:30:43.691: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:30:44.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1730" for this suite.
STEP: Destroying namespace "webhook-1730-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.231 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":198,"skipped":3272,"failed":0}
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:30:44.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jul  4 09:30:45.109: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:30:54.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-671" for this suite.

• [SLOW TEST:9.925 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":199,"skipped":3276,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:30:54.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-139
STEP: creating replication controller nodeport-test in namespace services-139
I0704 09:30:55.162556       6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-139, replica count: 2
I0704 09:30:58.213010       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0704 09:31:01.213358       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  4 09:31:01.213: INFO: Creating new exec pod
Jul  4 09:31:06.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-139 execpodj8rz6 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jul  4 09:31:13.108: INFO: stderr: "I0704 09:31:13.034388    3391 log.go:172] (0xc000119810) (0xc0008a8140) Create stream\nI0704 09:31:13.034428    3391 log.go:172] (0xc000119810) (0xc0008a8140) Stream added, broadcasting: 1\nI0704 09:31:13.036958    3391 log.go:172] (0xc000119810) Reply frame received for 1\nI0704 09:31:13.037000    3391 log.go:172] (0xc000119810) (0xc0008a81e0) Create stream\nI0704 09:31:13.037010    3391 log.go:172] (0xc000119810) (0xc0008a81e0) Stream added, broadcasting: 3\nI0704 09:31:13.038516    3391 log.go:172] (0xc000119810) Reply frame received for 3\nI0704 09:31:13.038559    3391 log.go:172] (0xc000119810) (0xc00085c000) Create stream\nI0704 09:31:13.038569    3391 log.go:172] (0xc000119810) (0xc00085c000) Stream added, broadcasting: 5\nI0704 09:31:13.039351    3391 log.go:172] (0xc000119810) Reply frame received for 5\nI0704 09:31:13.104002    3391 log.go:172] (0xc000119810) Data frame received for 3\nI0704 09:31:13.104027    3391 log.go:172] (0xc0008a81e0) (3) Data frame handling\nI0704 09:31:13.104062    3391 log.go:172] (0xc000119810) Data frame received for 5\nI0704 09:31:13.104088    3391 log.go:172] (0xc00085c000) (5) Data frame handling\nI0704 09:31:13.104109    3391 log.go:172] (0xc00085c000) (5) Data frame sent\nI0704 09:31:13.104120    3391 log.go:172] (0xc000119810) Data frame received for 5\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0704 09:31:13.104126    3391 log.go:172] (0xc00085c000) (5) Data frame handling\nI0704 09:31:13.105482    3391 log.go:172] (0xc000119810) Data frame received for 1\nI0704 09:31:13.105503    3391 log.go:172] (0xc0008a8140) (1) Data frame handling\nI0704 09:31:13.105517    3391 log.go:172] (0xc0008a8140) (1) Data frame sent\nI0704 09:31:13.105535    3391 log.go:172] (0xc000119810) (0xc0008a8140) Stream removed, broadcasting: 1\nI0704 09:31:13.105556    3391 log.go:172] (0xc000119810) Go away received\nI0704 09:31:13.105801    3391 log.go:172] (0xc000119810) (0xc0008a8140) Stream removed, broadcasting: 1\nI0704 09:31:13.105820    3391 log.go:172] (0xc000119810) (0xc0008a81e0) Stream removed, broadcasting: 3\nI0704 09:31:13.105826    3391 log.go:172] (0xc000119810) (0xc00085c000) Stream removed, broadcasting: 5\n"
Jul  4 09:31:13.108: INFO: stdout: ""
Jul  4 09:31:13.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-139 execpodj8rz6 -- /bin/sh -x -c nc -zv -t -w 2 10.108.158.195 80'
Jul  4 09:31:13.303: INFO: stderr: "I0704 09:31:13.229090    3423 log.go:172] (0xc0000f46e0) (0xc0006e1540) Create stream\nI0704 09:31:13.229312    3423 log.go:172] (0xc0000f46e0) (0xc0006e1540) Stream added, broadcasting: 1\nI0704 09:31:13.231666    3423 log.go:172] (0xc0000f46e0) Reply frame received for 1\nI0704 09:31:13.231706    3423 log.go:172] (0xc0000f46e0) (0xc0006bdae0) Create stream\nI0704 09:31:13.231725    3423 log.go:172] (0xc0000f46e0) (0xc0006bdae0) Stream added, broadcasting: 3\nI0704 09:31:13.232476    3423 log.go:172] (0xc0000f46e0) Reply frame received for 3\nI0704 09:31:13.232507    3423 log.go:172] (0xc0000f46e0) (0xc0008f2000) Create stream\nI0704 09:31:13.232526    3423 log.go:172] (0xc0000f46e0) (0xc0008f2000) Stream added, broadcasting: 5\nI0704 09:31:13.233332    3423 log.go:172] (0xc0000f46e0) Reply frame received for 5\nI0704 09:31:13.298623    3423 log.go:172] (0xc0000f46e0) Data frame received for 5\nI0704 09:31:13.298665    3423 log.go:172] (0xc0008f2000) (5) Data frame handling\nI0704 09:31:13.298679    3423 log.go:172] (0xc0008f2000) (5) Data frame sent\nI0704 09:31:13.298686    3423 log.go:172] (0xc0000f46e0) Data frame received for 5\nI0704 09:31:13.298693    3423 log.go:172] (0xc0008f2000) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.158.195 80\nConnection to 10.108.158.195 80 port [tcp/http] succeeded!\nI0704 09:31:13.298710    3423 log.go:172] (0xc0000f46e0) Data frame received for 3\nI0704 09:31:13.298719    3423 log.go:172] (0xc0006bdae0) (3) Data frame handling\nI0704 09:31:13.300022    3423 log.go:172] (0xc0000f46e0) Data frame received for 1\nI0704 09:31:13.300044    3423 log.go:172] (0xc0006e1540) (1) Data frame handling\nI0704 09:31:13.300062    3423 log.go:172] (0xc0006e1540) (1) Data frame sent\nI0704 09:31:13.300083    3423 log.go:172] (0xc0000f46e0) (0xc0006e1540) Stream removed, broadcasting: 1\nI0704 09:31:13.300096    3423 log.go:172] (0xc0000f46e0) Go away received\nI0704 09:31:13.300420    3423 log.go:172] (0xc0000f46e0) (0xc0006e1540) Stream removed, broadcasting: 1\nI0704 09:31:13.300441    3423 log.go:172] (0xc0000f46e0) (0xc0006bdae0) Stream removed, broadcasting: 3\nI0704 09:31:13.300452    3423 log.go:172] (0xc0000f46e0) (0xc0008f2000) Stream removed, broadcasting: 5\n"
Jul  4 09:31:13.303: INFO: stdout: ""
Jul  4 09:31:13.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-139 execpodj8rz6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32403'
Jul  4 09:31:13.498: INFO: stderr: "I0704 09:31:13.426705    3443 log.go:172] (0xc00053b130) (0xc00059dae0) Create stream\nI0704 09:31:13.426774    3443 log.go:172] (0xc00053b130) (0xc00059dae0) Stream added, broadcasting: 1\nI0704 09:31:13.429614    3443 log.go:172] (0xc00053b130) Reply frame received for 1\nI0704 09:31:13.429663    3443 log.go:172] (0xc00053b130) (0xc00068a000) Create stream\nI0704 09:31:13.429682    3443 log.go:172] (0xc00053b130) (0xc00068a000) Stream added, broadcasting: 3\nI0704 09:31:13.431353    3443 log.go:172] (0xc00053b130) Reply frame received for 3\nI0704 09:31:13.431387    3443 log.go:172] (0xc00053b130) (0xc00059dcc0) Create stream\nI0704 09:31:13.431399    3443 log.go:172] (0xc00053b130) (0xc00059dcc0) Stream added, broadcasting: 5\nI0704 09:31:13.433059    3443 log.go:172] (0xc00053b130) Reply frame received for 5\nI0704 09:31:13.494847    3443 log.go:172] (0xc00053b130) Data frame received for 5\nI0704 09:31:13.494875    3443 log.go:172] (0xc00059dcc0) (5) Data frame handling\nI0704 09:31:13.494888    3443 log.go:172] (0xc00059dcc0) (5) Data frame sent\nI0704 09:31:13.494898    3443 log.go:172] (0xc00053b130) Data frame received for 5\nI0704 09:31:13.494905    3443 log.go:172] (0xc00059dcc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32403\nConnection to 172.17.0.10 32403 port [tcp/32403] succeeded!\nI0704 09:31:13.494929    3443 log.go:172] (0xc00053b130) Data frame received for 3\nI0704 09:31:13.494936    3443 log.go:172] (0xc00068a000) (3) Data frame handling\nI0704 09:31:13.495869    3443 log.go:172] (0xc00053b130) Data frame received for 1\nI0704 09:31:13.495885    3443 log.go:172] (0xc00059dae0) (1) Data frame handling\nI0704 09:31:13.495898    3443 log.go:172] (0xc00059dae0) (1) Data frame sent\nI0704 09:31:13.495913    3443 log.go:172] (0xc00053b130) (0xc00059dae0) Stream removed, broadcasting: 1\nI0704 09:31:13.495982    3443 log.go:172] (0xc00053b130) Go away received\nI0704 09:31:13.496188    3443 log.go:172] (0xc00053b130) (0xc00059dae0) Stream removed, broadcasting: 1\nI0704 09:31:13.496200    3443 log.go:172] (0xc00053b130) (0xc00068a000) Stream removed, broadcasting: 3\nI0704 09:31:13.496209    3443 log.go:172] (0xc00053b130) (0xc00059dcc0) Stream removed, broadcasting: 5\n"
Jul  4 09:31:13.498: INFO: stdout: ""
Jul  4 09:31:13.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-139 execpodj8rz6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32403'
Jul  4 09:31:14.584: INFO: stderr: "I0704 09:31:14.488980    3464 log.go:172] (0xc000106b00) (0xc000647ae0) Create stream\nI0704 09:31:14.489047    3464 log.go:172] (0xc000106b00) (0xc000647ae0) Stream added, broadcasting: 1\nI0704 09:31:14.491957    3464 log.go:172] (0xc000106b00) Reply frame received for 1\nI0704 09:31:14.491991    3464 log.go:172] (0xc000106b00) (0xc000926000) Create stream\nI0704 09:31:14.492010    3464 log.go:172] (0xc000106b00) (0xc000926000) Stream added, broadcasting: 3\nI0704 09:31:14.492929    3464 log.go:172] (0xc000106b00) Reply frame received for 3\nI0704 09:31:14.492986    3464 log.go:172] (0xc000106b00) (0xc0001c0000) Create stream\nI0704 09:31:14.493010    3464 log.go:172] (0xc000106b00) (0xc0001c0000) Stream added, broadcasting: 5\nI0704 09:31:14.494028    3464 log.go:172] (0xc000106b00) Reply frame received for 5\nI0704 09:31:14.578603    3464 log.go:172] (0xc000106b00) Data frame received for 5\nI0704 09:31:14.578653    3464 log.go:172] (0xc0001c0000) (5) Data frame handling\nI0704 09:31:14.578684    3464 log.go:172] (0xc0001c0000) (5) Data frame sent\nI0704 09:31:14.578699    3464 log.go:172] (0xc000106b00) Data frame received for 5\nI0704 09:31:14.578707    3464 log.go:172] (0xc0001c0000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32403\nConnection to 172.17.0.8 32403 port [tcp/32403] succeeded!\nI0704 09:31:14.578738    3464 log.go:172] (0xc0001c0000) (5) Data frame sent\nI0704 09:31:14.578775    3464 log.go:172] (0xc000106b00) Data frame received for 5\nI0704 09:31:14.578794    3464 log.go:172] (0xc0001c0000) (5) Data frame handling\nI0704 09:31:14.579024    3464 log.go:172] (0xc000106b00) Data frame received for 3\nI0704 09:31:14.579045    3464 log.go:172] (0xc000926000) (3) Data frame handling\nI0704 09:31:14.580115    3464 log.go:172] (0xc000106b00) Data frame received for 1\nI0704 09:31:14.580133    3464 log.go:172] (0xc000647ae0) (1) Data frame handling\nI0704 09:31:14.580145    3464 log.go:172] (0xc000647ae0) (1) Data frame sent\nI0704 09:31:14.580160    3464 log.go:172] (0xc000106b00) (0xc000647ae0) Stream removed, broadcasting: 1\nI0704 09:31:14.580250    3464 log.go:172] (0xc000106b00) Go away received\nI0704 09:31:14.580422    3464 log.go:172] (0xc000106b00) (0xc000647ae0) Stream removed, broadcasting: 1\nI0704 09:31:14.580434    3464 log.go:172] (0xc000106b00) (0xc000926000) Stream removed, broadcasting: 3\nI0704 09:31:14.580441    3464 log.go:172] (0xc000106b00) (0xc0001c0000) Stream removed, broadcasting: 5\n"
Jul  4 09:31:14.584: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:31:14.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-139" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:19.795 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":200,"skipped":3285,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:31:14.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jul  4 09:31:14.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9045'
Jul  4 09:31:15.888: INFO: stderr: ""
Jul  4 09:31:15.888: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  4 09:31:15.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9045'
Jul  4 09:31:16.070: INFO: stderr: ""
Jul  4 09:31:16.070: INFO: stdout: "update-demo-nautilus-7457h update-demo-nautilus-zpgsj "
Jul  4 09:31:16.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7457h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:16.264: INFO: stderr: ""
Jul  4 09:31:16.264: INFO: stdout: ""
Jul  4 09:31:16.264: INFO: update-demo-nautilus-7457h is created but not running
Jul  4 09:31:21.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9045'
Jul  4 09:31:21.577: INFO: stderr: ""
Jul  4 09:31:21.577: INFO: stdout: "update-demo-nautilus-7457h update-demo-nautilus-zpgsj "
Jul  4 09:31:21.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7457h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:21.724: INFO: stderr: ""
Jul  4 09:31:21.724: INFO: stdout: ""
Jul  4 09:31:21.724: INFO: update-demo-nautilus-7457h is created but not running
Jul  4 09:31:26.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9045'
Jul  4 09:31:26.821: INFO: stderr: ""
Jul  4 09:31:26.821: INFO: stdout: "update-demo-nautilus-7457h update-demo-nautilus-zpgsj "
Jul  4 09:31:26.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7457h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:26.912: INFO: stderr: ""
Jul  4 09:31:26.912: INFO: stdout: "true"
Jul  4 09:31:26.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7457h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:27.005: INFO: stderr: ""
Jul  4 09:31:27.005: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  4 09:31:27.005: INFO: validating pod update-demo-nautilus-7457h
Jul  4 09:31:27.009: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  4 09:31:27.009: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  4 09:31:27.009: INFO: update-demo-nautilus-7457h is verified up and running
Jul  4 09:31:27.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zpgsj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:27.109: INFO: stderr: ""
Jul  4 09:31:27.110: INFO: stdout: "true"
Jul  4 09:31:27.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zpgsj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:27.201: INFO: stderr: ""
Jul  4 09:31:27.201: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  4 09:31:27.201: INFO: validating pod update-demo-nautilus-zpgsj
Jul  4 09:31:27.205: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  4 09:31:27.205: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  4 09:31:27.205: INFO: update-demo-nautilus-zpgsj is verified up and running
STEP: scaling down the replication controller
Jul  4 09:31:27.208: INFO: scanned /root for discovery docs: 
Jul  4 09:31:27.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9045'
Jul  4 09:31:28.342: INFO: stderr: ""
Jul  4 09:31:28.342: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  4 09:31:28.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9045'
Jul  4 09:31:28.446: INFO: stderr: ""
Jul  4 09:31:28.446: INFO: stdout: "update-demo-nautilus-7457h update-demo-nautilus-zpgsj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul  4 09:31:33.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9045'
Jul  4 09:31:33.548: INFO: stderr: ""
Jul  4 09:31:33.548: INFO: stdout: "update-demo-nautilus-7457h update-demo-nautilus-zpgsj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul  4 09:31:38.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9045'
Jul  4 09:31:38.648: INFO: stderr: ""
Jul  4 09:31:38.648: INFO: stdout: "update-demo-nautilus-zpgsj "
Jul  4 09:31:38.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zpgsj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:38.740: INFO: stderr: ""
Jul  4 09:31:38.740: INFO: stdout: "true"
Jul  4 09:31:38.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zpgsj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:39.014: INFO: stderr: ""
Jul  4 09:31:39.014: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  4 09:31:39.014: INFO: validating pod update-demo-nautilus-zpgsj
Jul  4 09:31:39.056: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  4 09:31:39.056: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  4 09:31:39.056: INFO: update-demo-nautilus-zpgsj is verified up and running
STEP: scaling up the replication controller
Jul  4 09:31:39.060: INFO: scanned /root for discovery docs: 
Jul  4 09:31:39.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9045'
Jul  4 09:31:40.263: INFO: stderr: ""
Jul  4 09:31:40.263: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  4 09:31:40.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9045'
Jul  4 09:31:40.356: INFO: stderr: ""
Jul  4 09:31:40.356: INFO: stdout: "update-demo-nautilus-866jd update-demo-nautilus-zpgsj "
Jul  4 09:31:40.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-866jd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:40.451: INFO: stderr: ""
Jul  4 09:31:40.451: INFO: stdout: ""
Jul  4 09:31:40.451: INFO: update-demo-nautilus-866jd is created but not running
Jul  4 09:31:45.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9045'
Jul  4 09:31:45.560: INFO: stderr: ""
Jul  4 09:31:45.560: INFO: stdout: "update-demo-nautilus-866jd update-demo-nautilus-zpgsj "
Jul  4 09:31:45.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-866jd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:45.649: INFO: stderr: ""
Jul  4 09:31:45.649: INFO: stdout: "true"
Jul  4 09:31:45.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-866jd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:45.734: INFO: stderr: ""
Jul  4 09:31:45.734: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  4 09:31:45.734: INFO: validating pod update-demo-nautilus-866jd
Jul  4 09:31:45.738: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  4 09:31:45.738: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  4 09:31:45.738: INFO: update-demo-nautilus-866jd is verified up and running
Jul  4 09:31:45.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zpgsj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:45.824: INFO: stderr: ""
Jul  4 09:31:45.824: INFO: stdout: "true"
Jul  4 09:31:45.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zpgsj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9045'
Jul  4 09:31:45.914: INFO: stderr: ""
Jul  4 09:31:45.914: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  4 09:31:45.914: INFO: validating pod update-demo-nautilus-zpgsj
Jul  4 09:31:45.936: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  4 09:31:45.936: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  4 09:31:45.936: INFO: update-demo-nautilus-zpgsj is verified up and running
STEP: using delete to clean up resources
Jul  4 09:31:45.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9045'
Jul  4 09:31:46.106: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  4 09:31:46.106: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  4 09:31:46.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9045'
Jul  4 09:31:46.204: INFO: stderr: "No resources found in kubectl-9045 namespace.\n"
Jul  4 09:31:46.204: INFO: stdout: ""
Jul  4 09:31:46.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9045 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  4 09:31:46.288: INFO: stderr: ""
Jul  4 09:31:46.288: INFO: stdout: "update-demo-nautilus-866jd\nupdate-demo-nautilus-zpgsj\n"
Jul  4 09:31:46.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9045'
Jul  4 09:31:46.907: INFO: stderr: "No resources found in kubectl-9045 namespace.\n"
Jul  4 09:31:46.908: INFO: stdout: ""
Jul  4 09:31:46.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9045 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  4 09:31:47.009: INFO: stderr: ""
Jul  4 09:31:47.009: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:31:47.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9045" for this suite.

• [SLOW TEST:32.423 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":201,"skipped":3315,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:31:47.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-rgwc
STEP: Creating a pod to test atomic-volume-subpath
Jul  4 09:31:47.582: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rgwc" in namespace "subpath-4049" to be "success or failure"
Jul  4 09:31:47.925: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Pending", Reason="", readiness=false. Elapsed: 343.2281ms
Jul  4 09:31:49.930: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347355794s
Jul  4 09:31:51.933: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Running", Reason="", readiness=true. Elapsed: 4.350992175s
Jul  4 09:31:53.937: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Running", Reason="", readiness=true. Elapsed: 6.35492378s
Jul  4 09:31:55.941: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Running", Reason="", readiness=true. Elapsed: 8.359179236s
Jul  4 09:31:57.946: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Running", Reason="", readiness=true. Elapsed: 10.363571216s
Jul  4 09:31:59.950: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Running", Reason="", readiness=true. Elapsed: 12.368043294s
Jul  4 09:32:01.955: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Running", Reason="", readiness=true. Elapsed: 14.372452726s
Jul  4 09:32:03.959: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Running", Reason="", readiness=true. Elapsed: 16.377103486s
Jul  4 09:32:05.964: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Running", Reason="", readiness=true. Elapsed: 18.381898417s
Jul  4 09:32:07.972: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Running", Reason="", readiness=true. Elapsed: 20.38967141s
Jul  4 09:32:09.976: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Running", Reason="", readiness=true. Elapsed: 22.393647292s
Jul  4 09:32:11.980: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Running", Reason="", readiness=true. Elapsed: 24.397378326s
Jul  4 09:32:13.985: INFO: Pod "pod-subpath-test-secret-rgwc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.403025184s
STEP: Saw pod success
Jul  4 09:32:13.985: INFO: Pod "pod-subpath-test-secret-rgwc" satisfied condition "success or failure"
Jul  4 09:32:13.989: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-rgwc container test-container-subpath-secret-rgwc: 
STEP: delete the pod
Jul  4 09:32:14.057: INFO: Waiting for pod pod-subpath-test-secret-rgwc to disappear
Jul  4 09:32:14.063: INFO: Pod pod-subpath-test-secret-rgwc no longer exists
STEP: Deleting pod pod-subpath-test-secret-rgwc
Jul  4 09:32:14.063: INFO: Deleting pod "pod-subpath-test-secret-rgwc" in namespace "subpath-4049"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:32:14.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4049" for this suite.

• [SLOW TEST:27.067 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":202,"skipped":3328,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:32:14.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e8086ef2-94a7-4e93-b25d-4c790349e748
STEP: Creating a pod to test consume secrets
Jul  4 09:32:14.389: INFO: Waiting up to 5m0s for pod "pod-secrets-9ea112d4-341e-4481-8f41-262a53e55605" in namespace "secrets-302" to be "success or failure"
Jul  4 09:32:14.396: INFO: Pod "pod-secrets-9ea112d4-341e-4481-8f41-262a53e55605": Phase="Pending", Reason="", readiness=false. Elapsed: 7.053856ms
Jul  4 09:32:16.679: INFO: Pod "pod-secrets-9ea112d4-341e-4481-8f41-262a53e55605": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290078194s
Jul  4 09:32:18.683: INFO: Pod "pod-secrets-9ea112d4-341e-4481-8f41-262a53e55605": Phase="Running", Reason="", readiness=true. Elapsed: 4.293916786s
Jul  4 09:32:20.687: INFO: Pod "pod-secrets-9ea112d4-341e-4481-8f41-262a53e55605": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.298113973s
STEP: Saw pod success
Jul  4 09:32:20.687: INFO: Pod "pod-secrets-9ea112d4-341e-4481-8f41-262a53e55605" satisfied condition "success or failure"
Jul  4 09:32:20.691: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-9ea112d4-341e-4481-8f41-262a53e55605 container secret-volume-test: 
STEP: delete the pod
Jul  4 09:32:20.747: INFO: Waiting for pod pod-secrets-9ea112d4-341e-4481-8f41-262a53e55605 to disappear
Jul  4 09:32:20.750: INFO: Pod pod-secrets-9ea112d4-341e-4481-8f41-262a53e55605 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:32:20.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-302" for this suite.
STEP: Destroying namespace "secret-namespace-5057" for this suite.

• [SLOW TEST:6.717 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3359,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:32:20.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:32:27.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8484" for this suite.

• [SLOW TEST:7.144 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":204,"skipped":3372,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:32:27.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Jul  4 09:32:28.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5561'
Jul  4 09:32:28.308: INFO: stderr: ""
Jul  4 09:32:28.308: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  4 09:32:28.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5561'
Jul  4 09:32:28.459: INFO: stderr: ""
Jul  4 09:32:28.459: INFO: stdout: "update-demo-nautilus-bhr5p update-demo-nautilus-x2gmd "
Jul  4 09:32:28.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhr5p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5561'
Jul  4 09:32:28.619: INFO: stderr: ""
Jul  4 09:32:28.619: INFO: stdout: ""
Jul  4 09:32:28.619: INFO: update-demo-nautilus-bhr5p is created but not running
Jul  4 09:32:33.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5561'
Jul  4 09:32:33.745: INFO: stderr: ""
Jul  4 09:32:33.745: INFO: stdout: "update-demo-nautilus-bhr5p update-demo-nautilus-x2gmd "
Jul  4 09:32:33.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhr5p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5561'
Jul  4 09:32:33.831: INFO: stderr: ""
Jul  4 09:32:33.831: INFO: stdout: ""
Jul  4 09:32:33.831: INFO: update-demo-nautilus-bhr5p is created but not running
Jul  4 09:32:38.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5561'
Jul  4 09:32:38.927: INFO: stderr: ""
Jul  4 09:32:38.927: INFO: stdout: "update-demo-nautilus-bhr5p update-demo-nautilus-x2gmd "
Jul  4 09:32:38.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhr5p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5561'
Jul  4 09:32:39.194: INFO: stderr: ""
Jul  4 09:32:39.194: INFO: stdout: "true"
Jul  4 09:32:39.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhr5p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5561'
Jul  4 09:32:39.286: INFO: stderr: ""
Jul  4 09:32:39.286: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  4 09:32:39.286: INFO: validating pod update-demo-nautilus-bhr5p
Jul  4 09:32:39.290: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  4 09:32:39.290: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  4 09:32:39.291: INFO: update-demo-nautilus-bhr5p is verified up and running
Jul  4 09:32:39.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x2gmd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5561'
Jul  4 09:32:39.385: INFO: stderr: ""
Jul  4 09:32:39.385: INFO: stdout: "true"
Jul  4 09:32:39.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x2gmd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5561'
Jul  4 09:32:39.470: INFO: stderr: ""
Jul  4 09:32:39.470: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  4 09:32:39.470: INFO: validating pod update-demo-nautilus-x2gmd
Jul  4 09:32:39.473: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  4 09:32:39.473: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  4 09:32:39.473: INFO: update-demo-nautilus-x2gmd is verified up and running
STEP: rolling-update to new replication controller
Jul  4 09:32:39.475: INFO: scanned /root for discovery docs: 
Jul  4 09:32:39.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5561'
Jul  4 09:33:15.567: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul  4 09:33:15.567: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  4 09:33:15.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5561'
Jul  4 09:33:15.736: INFO: stderr: ""
Jul  4 09:33:15.736: INFO: stdout: "update-demo-kitten-f5dbl update-demo-kitten-p8t6m "
Jul  4 09:33:15.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f5dbl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5561'
Jul  4 09:33:16.117: INFO: stderr: ""
Jul  4 09:33:16.117: INFO: stdout: "true"
Jul  4 09:33:16.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f5dbl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5561'
Jul  4 09:33:16.248: INFO: stderr: ""
Jul  4 09:33:16.248: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul  4 09:33:16.248: INFO: validating pod update-demo-kitten-f5dbl
Jul  4 09:33:16.251: INFO: got data: {
  "image": "kitten.jpg"
}

Jul  4 09:33:16.251: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul  4 09:33:16.251: INFO: update-demo-kitten-f5dbl is verified up and running
Jul  4 09:33:16.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-p8t6m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5561'
Jul  4 09:33:16.343: INFO: stderr: ""
Jul  4 09:33:16.343: INFO: stdout: "true"
Jul  4 09:33:16.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-p8t6m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5561'
Jul  4 09:33:16.426: INFO: stderr: ""
Jul  4 09:33:16.426: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul  4 09:33:16.426: INFO: validating pod update-demo-kitten-p8t6m
Jul  4 09:33:16.429: INFO: got data: {
  "image": "kitten.jpg"
}

Jul  4 09:33:16.430: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul  4 09:33:16.430: INFO: update-demo-kitten-p8t6m is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:33:16.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5561" for this suite.

• [SLOW TEST:48.492 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":205,"skipped":3396,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:33:16.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 09:33:16.550: INFO: Waiting up to 5m0s for pod "downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f" in namespace "downward-api-3126" to be "success or failure"
Jul  4 09:33:16.758: INFO: Pod "downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f": Phase="Pending", Reason="", readiness=false. Elapsed: 207.805967ms
Jul  4 09:33:18.770: INFO: Pod "downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219916895s
Jul  4 09:33:20.775: INFO: Pod "downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224390911s
Jul  4 09:33:23.897: INFO: Pod "downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.346358158s
Jul  4 09:33:26.381: INFO: Pod "downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.831063767s
Jul  4 09:33:29.027: INFO: Pod "downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f": Phase="Running", Reason="", readiness=true. Elapsed: 12.477173487s
Jul  4 09:33:31.032: INFO: Pod "downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.481536113s
STEP: Saw pod success
Jul  4 09:33:31.032: INFO: Pod "downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f" satisfied condition "success or failure"
Jul  4 09:33:31.035: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f container client-container: 
STEP: delete the pod
Jul  4 09:33:31.102: INFO: Waiting for pod downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f to disappear
Jul  4 09:33:31.320: INFO: Pod downwardapi-volume-105eb7ad-94f7-40fb-bd9d-d80e38c8452f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:33:31.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3126" for this suite.

• [SLOW TEST:15.083 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3417,"failed":0}
SSSSSSSSSSSSSSSS
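[Editor's note] The "should set mode on item file" pass above ultimately asserts the permission bits of the projected downward API file inside the container. A minimal local sketch of that assertion (hypothetical file name and content, not the suite's Go code):

```python
import os
import stat
import tempfile

# Project a downward-API-style item into a scratch dir, then check its mode.
# 0644 mirrors the [LinuxOnly] mode expectation in the test name; the
# "podinfo/podname" path and the content are illustrative only.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "podinfo", "podname")
    os.makedirs(os.path.dirname(path))
    with open(path, "w") as f:
        f.write("downwardapi-volume-example")
    os.chmod(path, 0o644)  # explicit chmod, so the process umask is irrelevant

    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o644
    assert mode == 0o644
```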
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:33:31.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-91b749bb-cf6c-4cda-88ed-19aaaa4966b7
STEP: Creating a pod to test consume configMaps
Jul  4 09:33:32.106: INFO: Waiting up to 5m0s for pod "pod-configmaps-aac85bf1-f073-492d-8345-0867b9b6128c" in namespace "configmap-584" to be "success or failure"
Jul  4 09:33:32.189: INFO: Pod "pod-configmaps-aac85bf1-f073-492d-8345-0867b9b6128c": Phase="Pending", Reason="", readiness=false. Elapsed: 83.356498ms
Jul  4 09:33:34.202: INFO: Pod "pod-configmaps-aac85bf1-f073-492d-8345-0867b9b6128c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096091063s
Jul  4 09:33:36.230: INFO: Pod "pod-configmaps-aac85bf1-f073-492d-8345-0867b9b6128c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124068323s
Jul  4 09:33:38.297: INFO: Pod "pod-configmaps-aac85bf1-f073-492d-8345-0867b9b6128c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19081451s
Jul  4 09:33:40.300: INFO: Pod "pod-configmaps-aac85bf1-f073-492d-8345-0867b9b6128c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.194143098s
STEP: Saw pod success
Jul  4 09:33:40.300: INFO: Pod "pod-configmaps-aac85bf1-f073-492d-8345-0867b9b6128c" satisfied condition "success or failure"
Jul  4 09:33:40.302: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-aac85bf1-f073-492d-8345-0867b9b6128c container configmap-volume-test: 
STEP: delete the pod
Jul  4 09:33:40.391: INFO: Waiting for pod pod-configmaps-aac85bf1-f073-492d-8345-0867b9b6128c to disappear
Jul  4 09:33:40.405: INFO: Pod pod-configmaps-aac85bf1-f073-492d-8345-0867b9b6128c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:33:40.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-584" for this suite.

• [SLOW TEST:8.898 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3433,"failed":0}
S
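[Editor's note] The ConfigMap "with mappings" variant above projects each data key to a caller-chosen path instead of the key name. A rough sketch of that key-to-path projection (keys, values, and paths are made up; in the cluster the kubelet does this work):

```python
import os
import tempfile

def project_configmap(data, items, dest):
    """Write each mapped ConfigMap key to its target path under dest."""
    for item in items:
        path = os.path.join(dest, item["path"])
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(data[item["key"]])

with tempfile.TemporaryDirectory() as d:
    data = {"data-1": "value-1"}                           # ConfigMap .data (illustrative)
    items = [{"key": "data-1", "path": "path/to/data-2"}]  # key -> mapped file path
    project_configmap(data, items, d)
    with open(os.path.join(d, "path", "to", "data-2")) as f:
        assert f.read() == "value-1"
```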
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:33:40.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  4 09:33:40.519: INFO: Waiting up to 5m0s for pod "pod-23bc907f-1342-4a3f-843f-43aec2039487" in namespace "emptydir-8457" to be "success or failure"
Jul  4 09:33:40.526: INFO: Pod "pod-23bc907f-1342-4a3f-843f-43aec2039487": Phase="Pending", Reason="", readiness=false. Elapsed: 7.116744ms
Jul  4 09:33:42.776: INFO: Pod "pod-23bc907f-1342-4a3f-843f-43aec2039487": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257516498s
Jul  4 09:33:44.778: INFO: Pod "pod-23bc907f-1342-4a3f-843f-43aec2039487": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259851261s
Jul  4 09:33:47.067: INFO: Pod "pod-23bc907f-1342-4a3f-843f-43aec2039487": Phase="Pending", Reason="", readiness=false. Elapsed: 6.548828936s
Jul  4 09:33:49.071: INFO: Pod "pod-23bc907f-1342-4a3f-843f-43aec2039487": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552303625s
Jul  4 09:33:51.075: INFO: Pod "pod-23bc907f-1342-4a3f-843f-43aec2039487": Phase="Pending", Reason="", readiness=false. Elapsed: 10.556736232s
Jul  4 09:33:55.486: INFO: Pod "pod-23bc907f-1342-4a3f-843f-43aec2039487": Phase="Pending", Reason="", readiness=false. Elapsed: 14.967160856s
Jul  4 09:33:57.490: INFO: Pod "pod-23bc907f-1342-4a3f-843f-43aec2039487": Phase="Pending", Reason="", readiness=false. Elapsed: 16.971364115s
Jul  4 09:33:59.494: INFO: Pod "pod-23bc907f-1342-4a3f-843f-43aec2039487": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.975324776s
STEP: Saw pod success
Jul  4 09:33:59.494: INFO: Pod "pod-23bc907f-1342-4a3f-843f-43aec2039487" satisfied condition "success or failure"
Jul  4 09:33:59.496: INFO: Trying to get logs from node jerma-worker2 pod pod-23bc907f-1342-4a3f-843f-43aec2039487 container test-container: 
STEP: delete the pod
Jul  4 09:33:59.581: INFO: Waiting for pod pod-23bc907f-1342-4a3f-843f-43aec2039487 to disappear
Jul  4 09:33:59.584: INFO: Pod pod-23bc907f-1342-4a3f-843f-43aec2039487 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:33:59.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8457" for this suite.

• [SLOW TEST:19.173 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3434,"failed":0}
SSS
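[Editor's note] The repeated `Phase="Pending" ... Elapsed:` lines above come from a poll loop that rechecks the pod phase on an interval until a terminal phase or the 5m0s timeout. A simplified sketch of that loop (the `get_phase` callable stands in for the API call; this is not the framework's Go implementation):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches Succeeded/Failed or timeout expires."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Drive it with a canned phase sequence instead of a live cluster.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), interval=0, sleep=lambda s: None)
assert result == "Succeeded"
```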
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:33:59.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:33:59.937: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul  4 09:34:04.940: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  4 09:34:04.940: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul  4 09:34:06.944: INFO: Creating deployment "test-rollover-deployment"
Jul  4 09:34:07.093: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul  4 09:34:09.286: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul  4 09:34:09.344: INFO: Ensure that both replica sets have 1 created replica
Jul  4 09:34:09.350: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul  4 09:34:09.356: INFO: Updating deployment test-rollover-deployment
Jul  4 09:34:09.356: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jul  4 09:34:11.542: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul  4 09:34:11.546: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul  4 09:34:11.551: INFO: all replica sets need to contain the pod-template-hash label
Jul  4 09:34:11.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452049, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:34:13.558: INFO: all replica sets need to contain the pod-template-hash label
Jul  4 09:34:13.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452049, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:34:15.558: INFO: all replica sets need to contain the pod-template-hash label
Jul  4 09:34:15.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452053, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:34:17.556: INFO: all replica sets need to contain the pod-template-hash label
Jul  4 09:34:17.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452053, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:34:19.559: INFO: all replica sets need to contain the pod-template-hash label
Jul  4 09:34:19.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452053, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:34:21.558: INFO: all replica sets need to contain the pod-template-hash label
Jul  4 09:34:21.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452053, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:34:23.556: INFO: all replica sets need to contain the pod-template-hash label
Jul  4 09:34:23.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452053, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452047, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:34:27.034: INFO: 
Jul  4 09:34:27.035: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul  4 09:34:28.366: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-3678 /apis/apps/v1/namespaces/deployment-3678/deployments/test-rollover-deployment ef9dd7ac-15cb-44ac-ba97-81750eda9cf7 27465 2 2020-07-04 09:34:06 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004eaef38  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-04 09:34:07 +0000 UTC,LastTransitionTime:2020-07-04 09:34:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-07-04 09:34:25 +0000 UTC,LastTransitionTime:2020-07-04 09:34:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul  4 09:34:28.369: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-3678 /apis/apps/v1/namespaces/deployment-3678/replicasets/test-rollover-deployment-574d6dfbff 6be9f4f4-73ee-4123-a6fb-e6fd4a84ba9a 27451 2 2020-07-04 09:34:09 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment ef9dd7ac-15cb-44ac-ba97-81750eda9cf7 0xc004eaf3c7 0xc004eaf3c8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004eaf438  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul  4 09:34:28.369: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jul  4 09:34:28.370: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-3678 /apis/apps/v1/namespaces/deployment-3678/replicasets/test-rollover-controller 3a30faa7-b435-486f-b57b-1269c767026f 27462 2 2020-07-04 09:33:59 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment ef9dd7ac-15cb-44ac-ba97-81750eda9cf7 0xc004eaf2f7 0xc004eaf2f8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004eaf358  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  4 09:34:28.370: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-3678 /apis/apps/v1/namespaces/deployment-3678/replicasets/test-rollover-deployment-f6c94f66c e3fc01a7-53d2-4162-9c60-5a9f7610003b 27402 2 2020-07-04 09:34:07 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment ef9dd7ac-15cb-44ac-ba97-81750eda9cf7 0xc004eaf4a0 0xc004eaf4a1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004eaf518  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  4 09:34:28.373: INFO: Pod "test-rollover-deployment-574d6dfbff-x8zg9" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-x8zg9 test-rollover-deployment-574d6dfbff- deployment-3678 /api/v1/namespaces/deployment-3678/pods/test-rollover-deployment-574d6dfbff-x8zg9 8778c116-cfd9-4163-a399-b676f7429883 27419 0 2020-07-04 09:34:09 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 6be9f4f4-73ee-4123-a6fb-e6fd4a84ba9a 0xc004eafa47 0xc004eafa48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72gqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72gqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72gqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:34:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:34:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.152,StartTime:2020-07-04 09:34:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:34:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://36ae680deea7d64260888ebd6cb61aca2d83c503ec2e9986cd42f146d8fc8326,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.152,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:34:28.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3678" for this suite.

• [SLOW TEST:28.789 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":209,"skipped":3437,"failed":0}
SSSSSSSS
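[Editor's note] Each `deployment status:` line in the rollover test above is re-evaluated against a completeness predicate: the rollover is done only once every replica comes from the new template and is available, with nothing unavailable. A hedged sketch of that predicate over the status fields visible in the log (field selection is an approximation of the suite's Go check, not a copy of it):

```python
def rollover_complete(spec_replicas, status):
    """True once all replicas come from the new ReplicaSet and are available."""
    return (status["replicas"] == spec_replicas
            and status["updatedReplicas"] == spec_replicas
            and status["availableReplicas"] == spec_replicas
            and status["unavailableReplicas"] == 0)

# Mid-rollover snapshot from the log: 2 replicas total, 1 updated, 1 unavailable.
in_progress = {"replicas": 2, "updatedReplicas": 1,
               "availableReplicas": 1, "unavailableReplicas": 1}
# Final Deployment status from the log: everything settled on the new ReplicaSet.
settled = {"replicas": 1, "updatedReplicas": 1,
           "availableReplicas": 1, "unavailableReplicas": 0}

assert not rollover_complete(1, in_progress)
assert rollover_complete(1, settled)
```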
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:34:28.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:34:30.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7840" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":210,"skipped":3445,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:34:32.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 09:34:33.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1" in namespace "downward-api-9863" to be "success or failure"
Jul  4 09:34:33.688: INFO: Pod "downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1": Phase="Pending", Reason="", readiness=false. Elapsed: 201.200714ms
Jul  4 09:34:37.389: INFO: Pod "downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.901507651s
Jul  4 09:34:39.866: INFO: Pod "downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.378777364s
Jul  4 09:34:43.215: INFO: Pod "downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.728053621s
Jul  4 09:34:45.257: INFO: Pod "downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.769509817s
Jul  4 09:34:47.259: INFO: Pod "downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.772366821s
Jul  4 09:34:49.299: INFO: Pod "downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.811949877s
STEP: Saw pod success
Jul  4 09:34:49.299: INFO: Pod "downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1" satisfied condition "success or failure"
Jul  4 09:34:49.301: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1 container client-container: 
STEP: delete the pod
Jul  4 09:34:49.856: INFO: Waiting for pod downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1 to disappear
Jul  4 09:34:49.991: INFO: Pod downwardapi-volume-8cf77639-c511-40c2-910e-a475eb931db1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:34:49.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9863" for this suite.

• [SLOW TEST:17.914 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3463,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:34:49.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:35:14.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3689" for this suite.

• [SLOW TEST:24.800 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":212,"skipped":3481,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:35:14.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  4 09:35:18.655: INFO: Waiting up to 5m0s for pod "pod-4340f773-8d99-47e5-8419-c0e8ce20aa6b" in namespace "emptydir-7401" to be "success or failure"
Jul  4 09:35:18.906: INFO: Pod "pod-4340f773-8d99-47e5-8419-c0e8ce20aa6b": Phase="Pending", Reason="", readiness=false. Elapsed: 250.858181ms
Jul  4 09:35:21.563: INFO: Pod "pod-4340f773-8d99-47e5-8419-c0e8ce20aa6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.907948926s
Jul  4 09:35:23.567: INFO: Pod "pod-4340f773-8d99-47e5-8419-c0e8ce20aa6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.91165685s
Jul  4 09:35:25.571: INFO: Pod "pod-4340f773-8d99-47e5-8419-c0e8ce20aa6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.915735228s
STEP: Saw pod success
Jul  4 09:35:25.571: INFO: Pod "pod-4340f773-8d99-47e5-8419-c0e8ce20aa6b" satisfied condition "success or failure"
Jul  4 09:35:25.574: INFO: Trying to get logs from node jerma-worker pod pod-4340f773-8d99-47e5-8419-c0e8ce20aa6b container test-container: 
STEP: delete the pod
Jul  4 09:35:25.911: INFO: Waiting for pod pod-4340f773-8d99-47e5-8419-c0e8ce20aa6b to disappear
Jul  4 09:35:25.939: INFO: Pod pod-4340f773-8d99-47e5-8419-c0e8ce20aa6b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:35:25.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7401" for this suite.

• [SLOW TEST:11.147 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3491,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:35:25.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:35:27.023: INFO: Create a RollingUpdate DaemonSet
Jul  4 09:35:27.025: INFO: Check that daemon pods launch on every node of the cluster
Jul  4 09:35:27.053: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:27.083: INFO: Number of nodes with available pods: 0
Jul  4 09:35:27.083: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:35:28.086: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:28.089: INFO: Number of nodes with available pods: 0
Jul  4 09:35:28.089: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:35:29.437: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:29.439: INFO: Number of nodes with available pods: 0
Jul  4 09:35:29.439: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:35:30.087: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:30.090: INFO: Number of nodes with available pods: 0
Jul  4 09:35:30.090: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:35:31.150: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:31.329: INFO: Number of nodes with available pods: 0
Jul  4 09:35:31.329: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:35:32.094: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:32.097: INFO: Number of nodes with available pods: 0
Jul  4 09:35:32.097: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:35:33.086: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:33.088: INFO: Number of nodes with available pods: 2
Jul  4 09:35:33.088: INFO: Number of running nodes: 2, number of available pods: 2
Jul  4 09:35:33.088: INFO: Update the DaemonSet to trigger a rollout
Jul  4 09:35:33.091: INFO: Updating DaemonSet daemon-set
Jul  4 09:35:40.175: INFO: Roll back the DaemonSet before rollout is complete
Jul  4 09:35:40.180: INFO: Updating DaemonSet daemon-set
Jul  4 09:35:40.180: INFO: Make sure DaemonSet rollback is complete
Jul  4 09:35:40.217: INFO: Wrong image for pod: daemon-set-gtmhx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul  4 09:35:40.217: INFO: Pod daemon-set-gtmhx is not available
Jul  4 09:35:40.412: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:41.450: INFO: Wrong image for pod: daemon-set-gtmhx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul  4 09:35:41.450: INFO: Pod daemon-set-gtmhx is not available
Jul  4 09:35:41.452: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:42.505: INFO: Wrong image for pod: daemon-set-gtmhx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul  4 09:35:42.505: INFO: Pod daemon-set-gtmhx is not available
Jul  4 09:35:42.569: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:43.544: INFO: Wrong image for pod: daemon-set-gtmhx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul  4 09:35:43.544: INFO: Pod daemon-set-gtmhx is not available
Jul  4 09:35:43.547: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:44.416: INFO: Wrong image for pod: daemon-set-gtmhx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul  4 09:35:44.416: INFO: Pod daemon-set-gtmhx is not available
Jul  4 09:35:44.420: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:45.416: INFO: Wrong image for pod: daemon-set-gtmhx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul  4 09:35:45.416: INFO: Pod daemon-set-gtmhx is not available
Jul  4 09:35:45.420: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:46.420: INFO: Wrong image for pod: daemon-set-gtmhx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul  4 09:35:46.420: INFO: Pod daemon-set-gtmhx is not available
Jul  4 09:35:46.422: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:35:47.416: INFO: Pod daemon-set-vknsl is not available
Jul  4 09:35:47.420: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7636, will wait for the garbage collector to delete the pods
Jul  4 09:35:47.484: INFO: Deleting DaemonSet.extensions daemon-set took: 6.494247ms
Jul  4 09:35:47.984: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.202286ms
Jul  4 09:35:56.448: INFO: Number of nodes with available pods: 0
Jul  4 09:35:56.448: INFO: Number of running nodes: 0, number of available pods: 0
Jul  4 09:35:56.451: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7636/daemonsets","resourceVersion":"27862"},"items":null}

Jul  4 09:35:56.454: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7636/pods","resourceVersion":"27862"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:35:56.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7636" for this suite.

• [SLOW TEST:30.524 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":214,"skipped":3504,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:35:56.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0704 09:36:07.864946       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  4 09:36:07.865: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:36:07.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8056" for this suite.

• [SLOW TEST:11.421 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":215,"skipped":3507,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:36:07.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul  4 09:36:10.090: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:36:10.134: INFO: Number of nodes with available pods: 0
Jul  4 09:36:10.134: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:36:11.231: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:36:11.413: INFO: Number of nodes with available pods: 0
Jul  4 09:36:11.413: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:36:12.139: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:36:12.142: INFO: Number of nodes with available pods: 0
Jul  4 09:36:12.142: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:36:13.175: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:36:13.177: INFO: Number of nodes with available pods: 0
Jul  4 09:36:13.177: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:36:14.300: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:36:14.303: INFO: Number of nodes with available pods: 0
Jul  4 09:36:14.303: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:36:15.192: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:36:15.243: INFO: Number of nodes with available pods: 0
Jul  4 09:36:15.243: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:36:16.153: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:36:16.257: INFO: Number of nodes with available pods: 0
Jul  4 09:36:16.257: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:36:17.431: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:36:17.506: INFO: Number of nodes with available pods: 1
Jul  4 09:36:17.506: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  4 09:36:18.144: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:36:18.147: INFO: Number of nodes with available pods: 1
Jul  4 09:36:18.147: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  4 09:36:19.139: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:36:19.143: INFO: Number of nodes with available pods: 2
Jul  4 09:36:19.143: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul  4 09:36:19.158: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:36:19.173: INFO: Number of nodes with available pods: 2
Jul  4 09:36:19.173: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4791, will wait for the garbage collector to delete the pods
Jul  4 09:36:20.403: INFO: Deleting DaemonSet.extensions daemon-set took: 5.329125ms
Jul  4 09:36:20.703: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.264522ms
Jul  4 09:36:26.807: INFO: Number of nodes with available pods: 0
Jul  4 09:36:26.807: INFO: Number of running nodes: 0, number of available pods: 0
Jul  4 09:36:26.810: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4791/daemonsets","resourceVersion":"28060"},"items":null}

Jul  4 09:36:26.812: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4791/pods","resourceVersion":"28060"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:36:26.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4791" for this suite.

• [SLOW TEST:18.935 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":216,"skipped":3534,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:36:26.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 09:36:27.165: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dcf6524c-36fa-4df9-a447-cb0a951435cb" in namespace "downward-api-9749" to be "success or failure"
Jul  4 09:36:27.205: INFO: Pod "downwardapi-volume-dcf6524c-36fa-4df9-a447-cb0a951435cb": Phase="Pending", Reason="", readiness=false. Elapsed: 39.485252ms
Jul  4 09:36:29.208: INFO: Pod "downwardapi-volume-dcf6524c-36fa-4df9-a447-cb0a951435cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043049947s
Jul  4 09:36:31.489: INFO: Pod "downwardapi-volume-dcf6524c-36fa-4df9-a447-cb0a951435cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324385289s
Jul  4 09:36:33.634: INFO: Pod "downwardapi-volume-dcf6524c-36fa-4df9-a447-cb0a951435cb": Phase="Running", Reason="", readiness=true. Elapsed: 6.469445948s
Jul  4 09:36:35.638: INFO: Pod "downwardapi-volume-dcf6524c-36fa-4df9-a447-cb0a951435cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.472530677s
STEP: Saw pod success
Jul  4 09:36:35.638: INFO: Pod "downwardapi-volume-dcf6524c-36fa-4df9-a447-cb0a951435cb" satisfied condition "success or failure"
Jul  4 09:36:35.640: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-dcf6524c-36fa-4df9-a447-cb0a951435cb container client-container: 
STEP: delete the pod
Jul  4 09:36:35.674: INFO: Waiting for pod downwardapi-volume-dcf6524c-36fa-4df9-a447-cb0a951435cb to disappear
Jul  4 09:36:36.091: INFO: Pod downwardapi-volume-dcf6524c-36fa-4df9-a447-cb0a951435cb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:36:36.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9749" for this suite.

• [SLOW TEST:9.534 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3548,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:36:36.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Jul  4 09:36:36.723: INFO: Waiting up to 5m0s for pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04" in namespace "var-expansion-9488" to be "success or failure"
Jul  4 09:36:36.727: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04": Phase="Pending", Reason="", readiness=false. Elapsed: 3.794861ms
Jul  4 09:36:39.351: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627960629s
Jul  4 09:36:41.543: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.819926691s
Jul  4 09:36:43.646: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.923600668s
Jul  4 09:36:45.982: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04": Phase="Pending", Reason="", readiness=false. Elapsed: 9.259730519s
Jul  4 09:36:47.998: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04": Phase="Pending", Reason="", readiness=false. Elapsed: 11.275274852s
Jul  4 09:36:50.001: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04": Phase="Pending", Reason="", readiness=false. Elapsed: 13.278141467s
Jul  4 09:36:52.068: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04": Phase="Pending", Reason="", readiness=false. Elapsed: 15.345563613s
Jul  4 09:36:54.090: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04": Phase="Pending", Reason="", readiness=false. Elapsed: 17.367511268s
Jul  4 09:36:57.335: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04": Phase="Pending", Reason="", readiness=false. Elapsed: 20.612625706s
Jul  4 09:36:59.339: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.615773631s
STEP: Saw pod success
Jul  4 09:36:59.339: INFO: Pod "var-expansion-cf371f53-8641-4e12-9937-e62797576e04" satisfied condition "success or failure"
Jul  4 09:36:59.340: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-cf371f53-8641-4e12-9937-e62797576e04 container dapi-container: 
STEP: delete the pod
Jul  4 09:36:59.965: INFO: Waiting for pod var-expansion-cf371f53-8641-4e12-9937-e62797576e04 to disappear
Jul  4 09:37:00.003: INFO: Pod var-expansion-cf371f53-8641-4e12-9937-e62797576e04 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:37:00.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9488" for this suite.

• [SLOW TEST:23.775 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3579,"failed":0}
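Editor's note: the Variable Expansion test above verifies that kubelet substitutes `$(VAR_NAME)` references in a container's command with values from the container's environment. A minimal sketch of that substitution rule in Python (simplified; the real implementation lives in Kubernetes' expansion package): `$$` escapes a dollar sign, so `$$(VAR)` yields a literal `$(VAR)`, and references to undefined variables are left unchanged.

```python
import re

def expand(command: str, env: dict) -> str:
    """Sketch of Kubernetes $(VAR_NAME) command expansion.

    Rules (simplified): $(NAME) is replaced by env[NAME]; $$ escapes a
    single $; a reference to an undefined variable is left as-is.
    """
    out = []
    i = 0
    while i < len(command):
        ch = command[i]
        if ch == '$' and i + 1 < len(command):
            if command[i + 1] == '$':
                # $$ collapses to a literal $, so $$(VAR) -> $(VAR)
                out.append('$')
                i += 2
                continue
            m = re.match(r'\((\w+)\)', command[i + 1:])
            if m:
                name = m.group(1)
                # undefined references pass through unchanged
                out.append(env.get(name, '$(%s)' % name))
                i += 1 + m.end()
                continue
        out.append(ch)
        i += 1
    return ''.join(out)
```

For example, with `env={"TEST_VAR": "test-value"}`, `expand("echo $(TEST_VAR)", env)` gives `echo test-value`, while `expand("echo $$(TEST_VAR)", env)` keeps the literal `echo $(TEST_VAR)`.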
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:37:00.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-49991263-2925-4993-8227-adb72f6a1f74
STEP: Creating secret with name s-test-opt-upd-70bf662f-b17c-4eee-abe9-33cf0429179e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-49991263-2925-4993-8227-adb72f6a1f74
STEP: Updating secret s-test-opt-upd-70bf662f-b17c-4eee-abe9-33cf0429179e
STEP: Creating secret with name s-test-opt-create-0c7980e1-4125-4c3c-b974-e3974c14cf0d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:37:32.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5424" for this suite.

• [SLOW TEST:33.416 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3594,"failed":0}
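Editor's note: the Secrets test above ("waiting to observe update in volume") relies on kubelet propagating Secret changes into already-mounted volumes. Kubelet does this with an atomic symlink swap: each key in the volume is a symlink through a `..data` directory link, and an update materializes the new payload in a fresh timestamped directory before retargeting `..data` in one `rename`. A simplified sketch of that pattern (assumptions: kubelet's real AtomicWriter also validates payloads and garbage-collects old timestamped dirs, which this sketch omits):

```python
import os
import time

def atomic_update(volume_dir: str, payload: dict) -> None:
    """Simplified sketch of kubelet's atomic secret-volume update.

    New key contents are written into a fresh timestamped directory,
    then the ..data symlink is swapped in a single rename so readers
    never see a half-written set of keys. (Stale timestamped dirs are
    left behind here; real kubelet cleans them up.)
    """
    ts_dir = os.path.join(volume_dir, '..%d' % time.time_ns())
    os.mkdir(ts_dir)
    for key, value in payload.items():
        with open(os.path.join(ts_dir, key), 'w') as f:
            f.write(value)
        # each visible key is a stable symlink routed through ..data
        link = os.path.join(volume_dir, key)
        if not os.path.islink(link):
            os.symlink(os.path.join('..data', key), link)
    # atomically retarget ..data at the new payload directory
    tmp_link = os.path.join(volume_dir, '..data_tmp')
    os.symlink(os.path.basename(ts_dir), tmp_link)
    os.rename(tmp_link, os.path.join(volume_dir, '..data'))
```

Because the swap is a single `rename(2)`, a pod reading `volume_dir/<key>` during an update observes either the complete old payload or the complete new one, which is what lets the test simply poll the mounted file until the new value appears.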
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:37:33.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul  4 09:37:52.311: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-149 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:37:52.311: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:37:52.346683       6 log.go:172] (0xc002c404d0) (0xc0002f9180) Create stream
I0704 09:37:52.346717       6 log.go:172] (0xc002c404d0) (0xc0002f9180) Stream added, broadcasting: 1
I0704 09:37:52.348923       6 log.go:172] (0xc002c404d0) Reply frame received for 1
I0704 09:37:52.348960       6 log.go:172] (0xc002c404d0) (0xc0014701e0) Create stream
I0704 09:37:52.348974       6 log.go:172] (0xc002c404d0) (0xc0014701e0) Stream added, broadcasting: 3
I0704 09:37:52.350282       6 log.go:172] (0xc002c404d0) Reply frame received for 3
I0704 09:37:52.350306       6 log.go:172] (0xc002c404d0) (0xc0002f9900) Create stream
I0704 09:37:52.350318       6 log.go:172] (0xc002c404d0) (0xc0002f9900) Stream added, broadcasting: 5
I0704 09:37:52.351218       6 log.go:172] (0xc002c404d0) Reply frame received for 5
I0704 09:37:52.415720       6 log.go:172] (0xc002c404d0) Data frame received for 5
I0704 09:37:52.415773       6 log.go:172] (0xc0002f9900) (5) Data frame handling
I0704 09:37:52.415806       6 log.go:172] (0xc002c404d0) Data frame received for 3
I0704 09:37:52.415822       6 log.go:172] (0xc0014701e0) (3) Data frame handling
I0704 09:37:52.415834       6 log.go:172] (0xc0014701e0) (3) Data frame sent
I0704 09:37:52.415843       6 log.go:172] (0xc002c404d0) Data frame received for 3
I0704 09:37:52.415862       6 log.go:172] (0xc0014701e0) (3) Data frame handling
I0704 09:37:52.417468       6 log.go:172] (0xc002c404d0) Data frame received for 1
I0704 09:37:52.417510       6 log.go:172] (0xc0002f9180) (1) Data frame handling
I0704 09:37:52.417537       6 log.go:172] (0xc0002f9180) (1) Data frame sent
I0704 09:37:52.417557       6 log.go:172] (0xc002c404d0) (0xc0002f9180) Stream removed, broadcasting: 1
I0704 09:37:52.417592       6 log.go:172] (0xc002c404d0) Go away received
I0704 09:37:52.417690       6 log.go:172] (0xc002c404d0) (0xc0002f9180) Stream removed, broadcasting: 1
I0704 09:37:52.417716       6 log.go:172] (0xc002c404d0) (0xc0014701e0) Stream removed, broadcasting: 3
I0704 09:37:52.417731       6 log.go:172] (0xc002c404d0) (0xc0002f9900) Stream removed, broadcasting: 5
Jul  4 09:37:52.417: INFO: Exec stderr: ""
Jul  4 09:37:52.417: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-149 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:37:52.417: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:37:52.450021       6 log.go:172] (0xc002c40b00) (0xc0028ee0a0) Create stream
I0704 09:37:52.450047       6 log.go:172] (0xc002c40b00) (0xc0028ee0a0) Stream added, broadcasting: 1
I0704 09:37:52.452283       6 log.go:172] (0xc002c40b00) Reply frame received for 1
I0704 09:37:52.452318       6 log.go:172] (0xc002c40b00) (0xc001411180) Create stream
I0704 09:37:52.452330       6 log.go:172] (0xc002c40b00) (0xc001411180) Stream added, broadcasting: 3
I0704 09:37:52.453729       6 log.go:172] (0xc002c40b00) Reply frame received for 3
I0704 09:37:52.453774       6 log.go:172] (0xc002c40b00) (0xc0014112c0) Create stream
I0704 09:37:52.453790       6 log.go:172] (0xc002c40b00) (0xc0014112c0) Stream added, broadcasting: 5
I0704 09:37:52.454733       6 log.go:172] (0xc002c40b00) Reply frame received for 5
I0704 09:37:52.524064       6 log.go:172] (0xc002c40b00) Data frame received for 5
I0704 09:37:52.524102       6 log.go:172] (0xc0014112c0) (5) Data frame handling
I0704 09:37:52.524129       6 log.go:172] (0xc002c40b00) Data frame received for 3
I0704 09:37:52.524141       6 log.go:172] (0xc001411180) (3) Data frame handling
I0704 09:37:52.524154       6 log.go:172] (0xc001411180) (3) Data frame sent
I0704 09:37:52.524175       6 log.go:172] (0xc002c40b00) Data frame received for 3
I0704 09:37:52.524200       6 log.go:172] (0xc001411180) (3) Data frame handling
I0704 09:37:52.525760       6 log.go:172] (0xc002c40b00) Data frame received for 1
I0704 09:37:52.525796       6 log.go:172] (0xc0028ee0a0) (1) Data frame handling
I0704 09:37:52.525821       6 log.go:172] (0xc0028ee0a0) (1) Data frame sent
I0704 09:37:52.525846       6 log.go:172] (0xc002c40b00) (0xc0028ee0a0) Stream removed, broadcasting: 1
I0704 09:37:52.525865       6 log.go:172] (0xc002c40b00) Go away received
I0704 09:37:52.526014       6 log.go:172] (0xc002c40b00) (0xc0028ee0a0) Stream removed, broadcasting: 1
I0704 09:37:52.526039       6 log.go:172] (0xc002c40b00) (0xc001411180) Stream removed, broadcasting: 3
I0704 09:37:52.526055       6 log.go:172] (0xc002c40b00) (0xc0014112c0) Stream removed, broadcasting: 5
Jul  4 09:37:52.526: INFO: Exec stderr: ""
Jul  4 09:37:52.526: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-149 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:37:52.526: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:37:52.560810       6 log.go:172] (0xc0025fe4d0) (0xc001411900) Create stream
I0704 09:37:52.560844       6 log.go:172] (0xc0025fe4d0) (0xc001411900) Stream added, broadcasting: 1
I0704 09:37:52.567078       6 log.go:172] (0xc0025fe4d0) Reply frame received for 1
I0704 09:37:52.567118       6 log.go:172] (0xc0025fe4d0) (0xc000b01b80) Create stream
I0704 09:37:52.567130       6 log.go:172] (0xc0025fe4d0) (0xc000b01b80) Stream added, broadcasting: 3
I0704 09:37:52.568190       6 log.go:172] (0xc0025fe4d0) Reply frame received for 3
I0704 09:37:52.568230       6 log.go:172] (0xc0025fe4d0) (0xc001470320) Create stream
I0704 09:37:52.568246       6 log.go:172] (0xc0025fe4d0) (0xc001470320) Stream added, broadcasting: 5
I0704 09:37:52.569048       6 log.go:172] (0xc0025fe4d0) Reply frame received for 5
I0704 09:37:52.637476       6 log.go:172] (0xc0025fe4d0) Data frame received for 5
I0704 09:37:52.637503       6 log.go:172] (0xc001470320) (5) Data frame handling
I0704 09:37:52.637525       6 log.go:172] (0xc0025fe4d0) Data frame received for 3
I0704 09:37:52.637553       6 log.go:172] (0xc000b01b80) (3) Data frame handling
I0704 09:37:52.637587       6 log.go:172] (0xc000b01b80) (3) Data frame sent
I0704 09:37:52.637654       6 log.go:172] (0xc0025fe4d0) Data frame received for 3
I0704 09:37:52.637678       6 log.go:172] (0xc000b01b80) (3) Data frame handling
I0704 09:37:52.639344       6 log.go:172] (0xc0025fe4d0) Data frame received for 1
I0704 09:37:52.639377       6 log.go:172] (0xc001411900) (1) Data frame handling
I0704 09:37:52.639405       6 log.go:172] (0xc001411900) (1) Data frame sent
I0704 09:37:52.639432       6 log.go:172] (0xc0025fe4d0) (0xc001411900) Stream removed, broadcasting: 1
I0704 09:37:52.639456       6 log.go:172] (0xc0025fe4d0) Go away received
I0704 09:37:52.639554       6 log.go:172] (0xc0025fe4d0) (0xc001411900) Stream removed, broadcasting: 1
I0704 09:37:52.639567       6 log.go:172] (0xc0025fe4d0) (0xc000b01b80) Stream removed, broadcasting: 3
I0704 09:37:52.639573       6 log.go:172] (0xc0025fe4d0) (0xc001470320) Stream removed, broadcasting: 5
Jul  4 09:37:52.639: INFO: Exec stderr: ""
Jul  4 09:37:52.639: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-149 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:37:52.639: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:37:52.886752       6 log.go:172] (0xc0044906e0) (0xc0022e2500) Create stream
I0704 09:37:52.886784       6 log.go:172] (0xc0044906e0) (0xc0022e2500) Stream added, broadcasting: 1
I0704 09:37:52.888727       6 log.go:172] (0xc0044906e0) Reply frame received for 1
I0704 09:37:52.888772       6 log.go:172] (0xc0044906e0) (0xc0016ca280) Create stream
I0704 09:37:52.888790       6 log.go:172] (0xc0044906e0) (0xc0016ca280) Stream added, broadcasting: 3
I0704 09:37:52.889831       6 log.go:172] (0xc0044906e0) Reply frame received for 3
I0704 09:37:52.889880       6 log.go:172] (0xc0044906e0) (0xc0016ca320) Create stream
I0704 09:37:52.889893       6 log.go:172] (0xc0044906e0) (0xc0016ca320) Stream added, broadcasting: 5
I0704 09:37:52.890816       6 log.go:172] (0xc0044906e0) Reply frame received for 5
I0704 09:37:52.954280       6 log.go:172] (0xc0044906e0) Data frame received for 5
I0704 09:37:52.954317       6 log.go:172] (0xc0016ca320) (5) Data frame handling
I0704 09:37:52.954348       6 log.go:172] (0xc0044906e0) Data frame received for 3
I0704 09:37:52.954359       6 log.go:172] (0xc0016ca280) (3) Data frame handling
I0704 09:37:52.954374       6 log.go:172] (0xc0016ca280) (3) Data frame sent
I0704 09:37:52.954394       6 log.go:172] (0xc0044906e0) Data frame received for 3
I0704 09:37:52.954405       6 log.go:172] (0xc0016ca280) (3) Data frame handling
I0704 09:37:52.955501       6 log.go:172] (0xc0044906e0) Data frame received for 1
I0704 09:37:52.955537       6 log.go:172] (0xc0022e2500) (1) Data frame handling
I0704 09:37:52.955557       6 log.go:172] (0xc0022e2500) (1) Data frame sent
I0704 09:37:52.955568       6 log.go:172] (0xc0044906e0) (0xc0022e2500) Stream removed, broadcasting: 1
I0704 09:37:52.955580       6 log.go:172] (0xc0044906e0) Go away received
I0704 09:37:52.955713       6 log.go:172] (0xc0044906e0) (0xc0022e2500) Stream removed, broadcasting: 1
I0704 09:37:52.955731       6 log.go:172] (0xc0044906e0) (0xc0016ca280) Stream removed, broadcasting: 3
I0704 09:37:52.955740       6 log.go:172] (0xc0044906e0) (0xc0016ca320) Stream removed, broadcasting: 5
Jul  4 09:37:52.955: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul  4 09:37:52.955: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-149 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:37:52.955: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:37:53.172294       6 log.go:172] (0xc0015da370) (0xc001470820) Create stream
I0704 09:37:53.172336       6 log.go:172] (0xc0015da370) (0xc001470820) Stream added, broadcasting: 1
I0704 09:37:53.174703       6 log.go:172] (0xc0015da370) Reply frame received for 1
I0704 09:37:53.174759       6 log.go:172] (0xc0015da370) (0xc0014708c0) Create stream
I0704 09:37:53.174785       6 log.go:172] (0xc0015da370) (0xc0014708c0) Stream added, broadcasting: 3
I0704 09:37:53.176654       6 log.go:172] (0xc0015da370) Reply frame received for 3
I0704 09:37:53.176766       6 log.go:172] (0xc0015da370) (0xc0016ca500) Create stream
I0704 09:37:53.176830       6 log.go:172] (0xc0015da370) (0xc0016ca500) Stream added, broadcasting: 5
I0704 09:37:53.178124       6 log.go:172] (0xc0015da370) Reply frame received for 5
I0704 09:37:53.233963       6 log.go:172] (0xc0015da370) Data frame received for 5
I0704 09:37:53.234005       6 log.go:172] (0xc0016ca500) (5) Data frame handling
I0704 09:37:53.234027       6 log.go:172] (0xc0015da370) Data frame received for 3
I0704 09:37:53.234042       6 log.go:172] (0xc0014708c0) (3) Data frame handling
I0704 09:37:53.234056       6 log.go:172] (0xc0014708c0) (3) Data frame sent
I0704 09:37:53.234067       6 log.go:172] (0xc0015da370) Data frame received for 3
I0704 09:37:53.234078       6 log.go:172] (0xc0014708c0) (3) Data frame handling
I0704 09:37:53.235448       6 log.go:172] (0xc0015da370) Data frame received for 1
I0704 09:37:53.235480       6 log.go:172] (0xc001470820) (1) Data frame handling
I0704 09:37:53.235499       6 log.go:172] (0xc001470820) (1) Data frame sent
I0704 09:37:53.235513       6 log.go:172] (0xc0015da370) (0xc001470820) Stream removed, broadcasting: 1
I0704 09:37:53.235541       6 log.go:172] (0xc0015da370) Go away received
I0704 09:37:53.235594       6 log.go:172] (0xc0015da370) (0xc001470820) Stream removed, broadcasting: 1
I0704 09:37:53.235603       6 log.go:172] (0xc0015da370) (0xc0014708c0) Stream removed, broadcasting: 3
I0704 09:37:53.235609       6 log.go:172] (0xc0015da370) (0xc0016ca500) Stream removed, broadcasting: 5
Jul  4 09:37:53.235: INFO: Exec stderr: ""
Jul  4 09:37:53.235: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-149 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:37:53.235: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:37:53.266581       6 log.go:172] (0xc0015dab00) (0xc001470d20) Create stream
I0704 09:37:53.266621       6 log.go:172] (0xc0015dab00) (0xc001470d20) Stream added, broadcasting: 1
I0704 09:37:53.268749       6 log.go:172] (0xc0015dab00) Reply frame received for 1
I0704 09:37:53.268804       6 log.go:172] (0xc0015dab00) (0xc0028ee1e0) Create stream
I0704 09:37:53.268820       6 log.go:172] (0xc0015dab00) (0xc0028ee1e0) Stream added, broadcasting: 3
I0704 09:37:53.270059       6 log.go:172] (0xc0015dab00) Reply frame received for 3
I0704 09:37:53.270110       6 log.go:172] (0xc0015dab00) (0xc0022e25a0) Create stream
I0704 09:37:53.270130       6 log.go:172] (0xc0015dab00) (0xc0022e25a0) Stream added, broadcasting: 5
I0704 09:37:53.271060       6 log.go:172] (0xc0015dab00) Reply frame received for 5
I0704 09:37:53.337866       6 log.go:172] (0xc0015dab00) Data frame received for 3
I0704 09:37:53.337911       6 log.go:172] (0xc0028ee1e0) (3) Data frame handling
I0704 09:37:53.337930       6 log.go:172] (0xc0028ee1e0) (3) Data frame sent
I0704 09:37:53.337951       6 log.go:172] (0xc0015dab00) Data frame received for 5
I0704 09:37:53.337963       6 log.go:172] (0xc0022e25a0) (5) Data frame handling
I0704 09:37:53.338025       6 log.go:172] (0xc0015dab00) Data frame received for 3
I0704 09:37:53.338048       6 log.go:172] (0xc0028ee1e0) (3) Data frame handling
I0704 09:37:53.339671       6 log.go:172] (0xc0015dab00) Data frame received for 1
I0704 09:37:53.339704       6 log.go:172] (0xc001470d20) (1) Data frame handling
I0704 09:37:53.339725       6 log.go:172] (0xc001470d20) (1) Data frame sent
I0704 09:37:53.339750       6 log.go:172] (0xc0015dab00) (0xc001470d20) Stream removed, broadcasting: 1
I0704 09:37:53.339771       6 log.go:172] (0xc0015dab00) Go away received
I0704 09:37:53.339898       6 log.go:172] (0xc0015dab00) (0xc001470d20) Stream removed, broadcasting: 1
I0704 09:37:53.339923       6 log.go:172] (0xc0015dab00) (0xc0028ee1e0) Stream removed, broadcasting: 3
I0704 09:37:53.339945       6 log.go:172] (0xc0015dab00) (0xc0022e25a0) Stream removed, broadcasting: 5
Jul  4 09:37:53.339: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jul  4 09:37:53.339: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-149 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:37:53.340: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:37:53.424251       6 log.go:172] (0xc002d14790) (0xc0016caaa0) Create stream
I0704 09:37:53.424288       6 log.go:172] (0xc002d14790) (0xc0016caaa0) Stream added, broadcasting: 1
I0704 09:37:53.426534       6 log.go:172] (0xc002d14790) Reply frame received for 1
I0704 09:37:53.426584       6 log.go:172] (0xc002d14790) (0xc001470e60) Create stream
I0704 09:37:53.426606       6 log.go:172] (0xc002d14790) (0xc001470e60) Stream added, broadcasting: 3
I0704 09:37:53.427762       6 log.go:172] (0xc002d14790) Reply frame received for 3
I0704 09:37:53.427814       6 log.go:172] (0xc002d14790) (0xc0028ee280) Create stream
I0704 09:37:53.427830       6 log.go:172] (0xc002d14790) (0xc0028ee280) Stream added, broadcasting: 5
I0704 09:37:53.428839       6 log.go:172] (0xc002d14790) Reply frame received for 5
I0704 09:37:53.512213       6 log.go:172] (0xc002d14790) Data frame received for 3
I0704 09:37:53.512238       6 log.go:172] (0xc001470e60) (3) Data frame handling
I0704 09:37:53.512252       6 log.go:172] (0xc001470e60) (3) Data frame sent
I0704 09:37:53.512273       6 log.go:172] (0xc002d14790) Data frame received for 5
I0704 09:37:53.512315       6 log.go:172] (0xc0028ee280) (5) Data frame handling
I0704 09:37:53.512340       6 log.go:172] (0xc002d14790) Data frame received for 3
I0704 09:37:53.512352       6 log.go:172] (0xc001470e60) (3) Data frame handling
I0704 09:37:53.513861       6 log.go:172] (0xc002d14790) Data frame received for 1
I0704 09:37:53.513888       6 log.go:172] (0xc0016caaa0) (1) Data frame handling
I0704 09:37:53.513907       6 log.go:172] (0xc0016caaa0) (1) Data frame sent
I0704 09:37:53.513922       6 log.go:172] (0xc002d14790) (0xc0016caaa0) Stream removed, broadcasting: 1
I0704 09:37:53.513940       6 log.go:172] (0xc002d14790) Go away received
I0704 09:37:53.514028       6 log.go:172] (0xc002d14790) (0xc0016caaa0) Stream removed, broadcasting: 1
I0704 09:37:53.514049       6 log.go:172] (0xc002d14790) (0xc001470e60) Stream removed, broadcasting: 3
I0704 09:37:53.514065       6 log.go:172] (0xc002d14790) (0xc0028ee280) Stream removed, broadcasting: 5
Jul  4 09:37:53.514: INFO: Exec stderr: ""
Jul  4 09:37:53.514: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-149 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:37:53.514: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:37:53.546577       6 log.go:172] (0xc0025feb00) (0xc0022705a0) Create stream
I0704 09:37:53.546615       6 log.go:172] (0xc0025feb00) (0xc0022705a0) Stream added, broadcasting: 1
I0704 09:37:53.548821       6 log.go:172] (0xc0025feb00) Reply frame received for 1
I0704 09:37:53.548857       6 log.go:172] (0xc0025feb00) (0xc002270640) Create stream
I0704 09:37:53.548871       6 log.go:172] (0xc0025feb00) (0xc002270640) Stream added, broadcasting: 3
I0704 09:37:53.550097       6 log.go:172] (0xc0025feb00) Reply frame received for 3
I0704 09:37:53.550151       6 log.go:172] (0xc0025feb00) (0xc0016cab40) Create stream
I0704 09:37:53.550178       6 log.go:172] (0xc0025feb00) (0xc0016cab40) Stream added, broadcasting: 5
I0704 09:37:53.551334       6 log.go:172] (0xc0025feb00) Reply frame received for 5
I0704 09:37:53.612635       6 log.go:172] (0xc0025feb00) Data frame received for 5
I0704 09:37:53.612670       6 log.go:172] (0xc0016cab40) (5) Data frame handling
I0704 09:37:53.612693       6 log.go:172] (0xc0025feb00) Data frame received for 3
I0704 09:37:53.612705       6 log.go:172] (0xc002270640) (3) Data frame handling
I0704 09:37:53.612719       6 log.go:172] (0xc002270640) (3) Data frame sent
I0704 09:37:53.612731       6 log.go:172] (0xc0025feb00) Data frame received for 3
I0704 09:37:53.612744       6 log.go:172] (0xc002270640) (3) Data frame handling
I0704 09:37:53.614400       6 log.go:172] (0xc0025feb00) Data frame received for 1
I0704 09:37:53.614420       6 log.go:172] (0xc0022705a0) (1) Data frame handling
I0704 09:37:53.614430       6 log.go:172] (0xc0022705a0) (1) Data frame sent
I0704 09:37:53.614524       6 log.go:172] (0xc0025feb00) (0xc0022705a0) Stream removed, broadcasting: 1
I0704 09:37:53.614590       6 log.go:172] (0xc0025feb00) Go away received
I0704 09:37:53.614630       6 log.go:172] (0xc0025feb00) (0xc0022705a0) Stream removed, broadcasting: 1
I0704 09:37:53.614649       6 log.go:172] (0xc0025feb00) (0xc002270640) Stream removed, broadcasting: 3
I0704 09:37:53.614657       6 log.go:172] (0xc0025feb00) (0xc0016cab40) Stream removed, broadcasting: 5
Jul  4 09:37:53.614: INFO: Exec stderr: ""
Jul  4 09:37:53.614: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-149 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:37:53.614: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:37:53.639015       6 log.go:172] (0xc002c41290) (0xc0028ee460) Create stream
I0704 09:37:53.639037       6 log.go:172] (0xc002c41290) (0xc0028ee460) Stream added, broadcasting: 1
I0704 09:37:53.640980       6 log.go:172] (0xc002c41290) Reply frame received for 1
I0704 09:37:53.641005       6 log.go:172] (0xc002c41290) (0xc0028ee500) Create stream
I0704 09:37:53.641018       6 log.go:172] (0xc002c41290) (0xc0028ee500) Stream added, broadcasting: 3
I0704 09:37:53.642034       6 log.go:172] (0xc002c41290) Reply frame received for 3
I0704 09:37:53.642074       6 log.go:172] (0xc002c41290) (0xc0022706e0) Create stream
I0704 09:37:53.642088       6 log.go:172] (0xc002c41290) (0xc0022706e0) Stream added, broadcasting: 5
I0704 09:37:53.642823       6 log.go:172] (0xc002c41290) Reply frame received for 5
I0704 09:37:53.697703       6 log.go:172] (0xc002c41290) Data frame received for 3
I0704 09:37:53.697725       6 log.go:172] (0xc0028ee500) (3) Data frame handling
I0704 09:37:53.697734       6 log.go:172] (0xc0028ee500) (3) Data frame sent
I0704 09:37:53.697741       6 log.go:172] (0xc002c41290) Data frame received for 3
I0704 09:37:53.697749       6 log.go:172] (0xc0028ee500) (3) Data frame handling
I0704 09:37:53.697762       6 log.go:172] (0xc002c41290) Data frame received for 5
I0704 09:37:53.697771       6 log.go:172] (0xc0022706e0) (5) Data frame handling
I0704 09:37:53.698588       6 log.go:172] (0xc002c41290) Data frame received for 1
I0704 09:37:53.698618       6 log.go:172] (0xc0028ee460) (1) Data frame handling
I0704 09:37:53.698629       6 log.go:172] (0xc0028ee460) (1) Data frame sent
I0704 09:37:53.698641       6 log.go:172] (0xc002c41290) (0xc0028ee460) Stream removed, broadcasting: 1
I0704 09:37:53.698662       6 log.go:172] (0xc002c41290) Go away received
I0704 09:37:53.698776       6 log.go:172] (0xc002c41290) (0xc0028ee460) Stream removed, broadcasting: 1
I0704 09:37:53.698792       6 log.go:172] (0xc002c41290) (0xc0028ee500) Stream removed, broadcasting: 3
I0704 09:37:53.698800       6 log.go:172] (0xc002c41290) (0xc0022706e0) Stream removed, broadcasting: 5
Jul  4 09:37:53.698: INFO: Exec stderr: ""
Jul  4 09:37:53.698: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-149 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  4 09:37:53.698: INFO: >>> kubeConfig: /root/.kube/config
I0704 09:37:53.717891       6 log.go:172] (0xc002d14e70) (0xc0016cb0e0) Create stream
I0704 09:37:53.717907       6 log.go:172] (0xc002d14e70) (0xc0016cb0e0) Stream added, broadcasting: 1
I0704 09:37:53.719495       6 log.go:172] (0xc002d14e70) Reply frame received for 1
I0704 09:37:53.719531       6 log.go:172] (0xc002d14e70) (0xc001470f00) Create stream
I0704 09:37:53.719543       6 log.go:172] (0xc002d14e70) (0xc001470f00) Stream added, broadcasting: 3
I0704 09:37:53.720476       6 log.go:172] (0xc002d14e70) Reply frame received for 3
I0704 09:37:53.720512       6 log.go:172] (0xc002d14e70) (0xc001470fa0) Create stream
I0704 09:37:53.720525       6 log.go:172] (0xc002d14e70) (0xc001470fa0) Stream added, broadcasting: 5
I0704 09:37:53.721622       6 log.go:172] (0xc002d14e70) Reply frame received for 5
I0704 09:37:53.786127       6 log.go:172] (0xc002d14e70) Data frame received for 5
I0704 09:37:53.786170       6 log.go:172] (0xc001470fa0) (5) Data frame handling
I0704 09:37:53.786197       6 log.go:172] (0xc002d14e70) Data frame received for 3
I0704 09:37:53.786210       6 log.go:172] (0xc001470f00) (3) Data frame handling
I0704 09:37:53.786227       6 log.go:172] (0xc001470f00) (3) Data frame sent
I0704 09:37:53.786247       6 log.go:172] (0xc002d14e70) Data frame received for 3
I0704 09:37:53.786276       6 log.go:172] (0xc001470f00) (3) Data frame handling
I0704 09:37:53.787357       6 log.go:172] (0xc002d14e70) Data frame received for 1
I0704 09:37:53.787397       6 log.go:172] (0xc0016cb0e0) (1) Data frame handling
I0704 09:37:53.787445       6 log.go:172] (0xc0016cb0e0) (1) Data frame sent
I0704 09:37:53.787495       6 log.go:172] (0xc002d14e70) (0xc0016cb0e0) Stream removed, broadcasting: 1
I0704 09:37:53.787536       6 log.go:172] (0xc002d14e70) Go away received
I0704 09:37:53.787602       6 log.go:172] (0xc002d14e70) (0xc0016cb0e0) Stream removed, broadcasting: 1
I0704 09:37:53.787622       6 log.go:172] (0xc002d14e70) (0xc001470f00) Stream removed, broadcasting: 3
I0704 09:37:53.787629       6 log.go:172] (0xc002d14e70) (0xc001470fa0) Stream removed, broadcasting: 5
Jul  4 09:37:53.787: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:37:53.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-149" for this suite.

• [SLOW TEST:20.239 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3618,"failed":0}
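Editor's note: the KubeletManagedEtcHosts test above distinguishes kubelet-managed `/etc/hosts` files (hostNetwork=false, no explicit `/etc/hosts` mount) from unmanaged ones by looking for kubelet's header marker. A sketch of the file kubelet generates for such pods (the header string matches kubelet's managed-hosts marker; the exact set of IPv6 convenience entries is reproduced from kubelet's template and should be treated as illustrative):

```python
def managed_hosts_file(pod_ip: str, hostname: str) -> str:
    """Sketch of the /etc/hosts content kubelet writes for a
    hostNetwork=false pod: a marker header, standard localhost
    entries, and the pod's own IP/hostname mapping."""
    lines = [
        '# Kubernetes-managed hosts file.',
        '127.0.0.1\tlocalhost',
        '::1\tlocalhost ip6-localhost ip6-loopback',
        'fe00::0\tip6-localnet',
        'fe00::0\tip6-mcastprefix',
        'fe00::1\tip6-allnodes',
        'fe00::2\tip6-allrouters',
        '%s\t%s' % (pod_ip, hostname),
    ]
    return '\n'.join(lines) + '\n'
```

The repeated `cat /etc/hosts` / `cat /etc/hosts-original` execs in the log are checking exactly this: managed containers see the marker header, while the hostNetwork=true pod and the container with its own `/etc/hosts` mount see the node's original file without it.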
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:37:53.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:37:54.356: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jul  4 09:37:56.576: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452274, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452274, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452274, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452274, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:37:59.591: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:37:59.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Creating a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:38:00.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8897" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:7.164 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":221,"skipped":3675,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:38:00.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:38:13.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7470" for this suite.

• [SLOW TEST:12.985 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":222,"skipped":3676,"failed":0}
SSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:38:13.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jul  4 09:38:24.465: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jul  4 09:38:29.730: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:38:29.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3953" for this suite.

• [SLOW TEST:15.796 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":223,"skipped":3681,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:38:29.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:38:32.491: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 09:38:35.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452313, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452313, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452313, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452312, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:38:38.129 - 09:38:56.104: INFO: deployment status unchanged across eight further polls (09:38:38, :39, :42, :43, :45, :47, :52, :56): ReadyReplicas:0, AvailableReplicas:0, Available=False (MinimumReplicasUnavailable: "Deployment does not have minimum availability."), Progressing=True (ReplicaSetUpdated: ReplicaSet "sample-webhook-deployment-5f65f8c764" is progressing)
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:38:59.098: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:39:15.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2135" for this suite.
STEP: Destroying namespace "webhook-2135-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:51.994 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":224,"skipped":3707,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:39:21.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul  4 09:39:24.487: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:39:24.489: INFO: Number of nodes with available pods: 0
Jul  4 09:39:24.489: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 09:39:25.872 - 09:39:31.495: INFO: polled seven more times (09:39:25, :26, :26, :27, :28, :29, :30, :31); on each poll the jerma-control-plane node was skipped (DaemonSet pods can't tolerate taint node-role.kubernetes.io/master:NoSchedule) and the number of nodes with available pods remained 0
Jul  4 09:39:32.492: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:39:32.495: INFO: Number of nodes with available pods: 2
Jul  4 09:39:32.495: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jul  4 09:39:32.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:39:32.511: INFO: Number of nodes with available pods: 1
Jul  4 09:39:32.511: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  4 09:39:33.515 - 09:39:49.539: INFO: polled roughly once per second for 17 further iterations; on each poll the jerma-control-plane node was skipped (master NoSchedule taint) and the number of nodes with available pods remained 1 while the replacement daemon pod started on jerma-worker
Jul  4 09:39:50.515: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 09:39:50.517: INFO: Number of nodes with available pods: 2
Jul  4 09:39:50.517: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8820, will wait for the garbage collector to delete the pods
Jul  4 09:39:50.664: INFO: Deleting DaemonSet.extensions daemon-set took: 92.890305ms
Jul  4 09:39:51.064: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.204255ms
Jul  4 09:39:56.867: INFO: Number of nodes with available pods: 0
Jul  4 09:39:56.867: INFO: Number of running nodes: 0, number of available pods: 0
Jul  4 09:39:56.870: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8820/daemonsets","resourceVersion":"28976"},"items":null}

Jul  4 09:39:56.872: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8820/pods","resourceVersion":"28976"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:39:56.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8820" for this suite.

• [SLOW TEST:35.150 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":225,"skipped":3717,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:39:56.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-f9bs
STEP: Creating a pod to test atomic-volume-subpath
Jul  4 09:39:56.982: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-f9bs" in namespace "subpath-7811" to be "success or failure"
Jul  4 09:39:56.986: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Pending", Reason="", readiness=false. Elapsed: 3.210989ms
Jul  4 09:39:58.989: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00690868s
Jul  4 09:40:00.992: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00987868s
Jul  4 09:40:02.996: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Running", Reason="", readiness=true. Elapsed: 6.013540642s
Jul  4 09:40:04.999: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Running", Reason="", readiness=true. Elapsed: 8.016827904s
Jul  4 09:40:07.002: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Running", Reason="", readiness=true. Elapsed: 10.01984526s
Jul  4 09:40:09.006: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Running", Reason="", readiness=true. Elapsed: 12.023247844s
Jul  4 09:40:11.009: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Running", Reason="", readiness=true. Elapsed: 14.026214408s
Jul  4 09:40:13.011: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Running", Reason="", readiness=true. Elapsed: 16.02875353s
Jul  4 09:40:15.015: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Running", Reason="", readiness=true. Elapsed: 18.032643649s
Jul  4 09:40:17.018: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Running", Reason="", readiness=true. Elapsed: 20.035772454s
Jul  4 09:40:19.022: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Running", Reason="", readiness=true. Elapsed: 22.039230147s
Jul  4 09:40:21.422: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Running", Reason="", readiness=true. Elapsed: 24.439775639s
Jul  4 09:40:23.425: INFO: Pod "pod-subpath-test-configmap-f9bs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.443111508s
STEP: Saw pod success
Jul  4 09:40:23.426: INFO: Pod "pod-subpath-test-configmap-f9bs" satisfied condition "success or failure"
Jul  4 09:40:23.428: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-f9bs container test-container-subpath-configmap-f9bs: 
STEP: delete the pod
Jul  4 09:40:23.777: INFO: Waiting for pod pod-subpath-test-configmap-f9bs to disappear
Jul  4 09:40:23.805: INFO: Pod pod-subpath-test-configmap-f9bs no longer exists
STEP: Deleting pod pod-subpath-test-configmap-f9bs
Jul  4 09:40:23.805: INFO: Deleting pod "pod-subpath-test-configmap-f9bs" in namespace "subpath-7811"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:40:23.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7811" for this suite.

• [SLOW TEST:26.937 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":226,"skipped":3735,"failed":0}
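[editor's note] The subpath spec above mounts a single ConfigMap key over an existing file via `subPath`. A minimal illustrative pod spec, built as a plain dict; all names here are hypothetical, not the exact objects the e2e framework generates:

```python
# Sketch of a pod that mounts one ConfigMap key over an existing file
# using subPath, as exercised by the test above. Names are illustrative.
def subpath_pod(name, configmap_name, mount_path="/etc/hosts"):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                "command": ["cat", mount_path],
                "volumeMounts": [{
                    "name": "config",
                    "mountPath": mount_path,  # path of an existing file
                    "subPath": "data-file",   # mount one key, not the dir
                }],
            }],
            "volumes": [{
                "name": "config",
                "configMap": {"name": configmap_name},
            }],
        },
    }

pod = subpath_pod("pod-subpath-test-configmap", "my-configmap")
```

Because `subPath` names a single key, the kubelet bind-mounts just that file, which is what lets it land on top of an already existing path inside the container image.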
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:40:23.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:40:24.464: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2677d3e4-0ca4-4528-bdb4-2d5a120c96cc", Controller:(*bool)(0xc003d96c6e), BlockOwnerDeletion:(*bool)(0xc003d96c6f)}}
Jul  4 09:40:24.515: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"30f52958-bd93-4f12-a713-8d34ca10e88f", Controller:(*bool)(0xc0055714ce), BlockOwnerDeletion:(*bool)(0xc0055714cf)}}
Jul  4 09:40:24.578: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c1a26165-353c-4623-a2fa-c522c09f1714", Controller:(*bool)(0xc003d96e1a), BlockOwnerDeletion:(*bool)(0xc003d96e1b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:40:29.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3318" for this suite.

• [SLOW TEST:6.015 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":227,"skipped":3755,"failed":0}
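[editor's note] Per the ownerReferences printed above, the garbage-collector spec wires three pods into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and verifies the collector is not blocked by it. The cycle can be sketched as:

```python
# The ownership cycle from the log above; UIDs omitted as placeholders.
owners = {
    "pod1": {"apiVersion": "v1", "kind": "Pod", "name": "pod3"},
    "pod2": {"apiVersion": "v1", "kind": "Pod", "name": "pod1"},
    "pod3": {"apiVersion": "v1", "kind": "Pod", "name": "pod2"},
}

def follow(start, hops):
    """Follow ownerReference names for `hops` steps."""
    cur = start
    for _ in range(hops):
        cur = owners[cur]["name"]
    return cur

# The circle closes after three hops; this is exactly the dependency
# cycle the garbage collector must be able to break rather than deadlock on.
assert follow("pod1", 3) == "pod1"
```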
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:40:29.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:40:31.304: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 09:40:33.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452431, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452431, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452431, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452431, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:40:35.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452431, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452431, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452431, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452431, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:40:39.462: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:40:39.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1340-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:40:42.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5888" for this suite.
STEP: Destroying namespace "webhook-5888-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.075 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":228,"skipped":3802,"failed":0}
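[editor's note] The spec above registers a mutating webhook for the custom resource `e2e-test-webhook-1340-crds.webhook.example.com` via the admissionregistration.k8s.io/v1 API. A hypothetical sketch of such a registration (the service path and webhook name are placeholders; only the namespace and resource come from the log):

```python
# Hedged sketch of a v1 MutatingWebhookConfiguration for the custom
# resource in the test above. Placeholder values are marked inline.
webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "MutatingWebhookConfiguration",
    "metadata": {"name": "e2e-test-mutating-webhook"},  # placeholder
    "webhooks": [{
        "name": "mutate-crd.webhook.example.com",       # placeholder
        "clientConfig": {
            "service": {
                "namespace": "webhook-5888",            # from the log
                "name": "e2e-test-webhook",             # from the log
                "path": "/mutating-custom-resource",    # placeholder
            },
        },
        "rules": [{
            "apiGroups": ["webhook.example.com"],
            "apiVersions": ["v1"],
            "operations": ["CREATE"],
            "resources": ["e2e-test-webhook-1340-crds"],
        }],
        "sideEffects": "None",
        "admissionReviewVersions": ["v1"],
    }],
}
```

With pruning enabled on the CRD, any field the webhook patches in must be declared in the CRD's structural schema, or the API server prunes it away again; that interaction is what this conformance test checks.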
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:40:42.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:40:43.035: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:40:53.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6968" for this suite.

• [SLOW TEST:10.301 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":229,"skipped":3808,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:40:53.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:40:53.384: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4" in namespace "security-context-test-7914" to be "success or failure"
Jul  4 09:40:53.411: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.287935ms
Jul  4 09:40:56.111: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.72685116s
Jul  4 09:40:58.115: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.730294577s
Jul  4 09:41:00.119: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.734604452s
Jul  4 09:41:03.566: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.181469089s
Jul  4 09:41:05.570: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.185571542s
Jul  4 09:41:07.573: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.188604203s
Jul  4 09:41:09.577: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.193042087s
Jul  4 09:41:11.582: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.197878846s
Jul  4 09:41:14.181: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.796901114s
Jul  4 09:41:17.513: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.128629462s
Jul  4 09:41:19.517: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Running", Reason="", readiness=true. Elapsed: 26.132948753s
Jul  4 09:41:21.657: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Running", Reason="", readiness=true. Elapsed: 28.272909009s
Jul  4 09:41:23.843: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.458697013s
Jul  4 09:41:23.843: INFO: Pod "busybox-readonly-false-43619431-d97f-457d-936b-28bf167da2e4" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:41:23.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7914" for this suite.

• [SLOW TEST:30.868 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3824,"failed":0}
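[editor's note] The security-context spec above runs a busybox container with `readOnlyRootFilesystem: false` and expects writes to the root filesystem to succeed. A hypothetical minimal pod along those lines (command and names are illustrative):

```python
# Hedged sketch of the pod shape the test above exercises: a busybox
# container whose securityContext leaves the root filesystem writable.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-readonly-false"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "writer",
            "image": "busybox",
            # Writing outside any volume succeeds only because the
            # container rootfs is writable; with readOnlyRootFilesystem
            # set to true this same command would fail.
            "command": ["sh", "-c", "echo ok > /rootfs-probe"],
            "securityContext": {"readOnlyRootFilesystem": False},
        }],
    },
}
```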
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:41:24.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:41:24.772: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:41:26.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2461" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":231,"skipped":3852,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:41:26.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 09:41:27.280: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 09:41:29.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:41:32.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:41:33.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452487, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 09:41:37.311: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:41:37.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5312" for this suite.
STEP: Destroying namespace "webhook-5312-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.199 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":232,"skipped":3885,"failed":0}
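[editor's note] The STEP sequence above walks the discovery chain: fetch `/apis`, find the `admissionregistration.k8s.io` group, then the `v1` group/version, then the two webhook resources. That walk can be sketched against an in-memory stand-in for the discovery documents:

```python
# Minimal stand-ins for the discovery documents; a real cluster returns
# many more groups. Mirrors the STEP sequence in the log above.
apis_doc = {
    "groups": [{
        "name": "admissionregistration.k8s.io",
        "versions": [{
            "groupVersion": "admissionregistration.k8s.io/v1",
            "version": "v1",
        }],
    }],
}
group_version_doc = {
    "groupVersion": "admissionregistration.k8s.io/v1",
    "resources": [
        {"name": "mutatingwebhookconfigurations"},
        {"name": "validatingwebhookconfigurations"},
    ],
}

def find_group(doc, name):
    """Locate a named API group in an /apis discovery document."""
    return next(g for g in doc["groups"] if g["name"] == name)

group = find_group(apis_doc, "admissionregistration.k8s.io")
versions = {v["groupVersion"] for v in group["versions"]}
resources = {r["name"] for r in group_version_doc["resources"]}
```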
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:41:38.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Jul  4 09:41:40.660: INFO: created pod pod-service-account-defaultsa
Jul  4 09:41:40.660: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul  4 09:41:40.718: INFO: created pod pod-service-account-mountsa
Jul  4 09:41:40.718: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul  4 09:41:40.919: INFO: created pod pod-service-account-nomountsa
Jul  4 09:41:40.919: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul  4 09:41:41.040: INFO: created pod pod-service-account-defaultsa-mountspec
Jul  4 09:41:41.040: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul  4 09:41:41.048: INFO: created pod pod-service-account-mountsa-mountspec
Jul  4 09:41:41.048: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul  4 09:41:41.094: INFO: created pod pod-service-account-nomountsa-mountspec
Jul  4 09:41:41.094: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul  4 09:41:41.134: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul  4 09:41:41.134: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul  4 09:41:41.384: INFO: created pod pod-service-account-mountsa-nomountspec
Jul  4 09:41:41.384: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul  4 09:41:41.760: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul  4 09:41:41.760: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:41:41.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8168" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":233,"skipped":3912,"failed":0}
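[editor's note] The nine pods above form a truth table for the automount precedence rule: `pod.spec.automountServiceAccountToken`, when set, overrides the ServiceAccount's `automountServiceAccountToken`, which in turn defaults to mounting. The observed matrix matches this rule exactly:

```python
def token_mounted(sa_automount, pod_automount):
    """Effective token-mount decision: the pod-level field wins when set,
    otherwise the ServiceAccount field, otherwise mount by default."""
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True

# (SA setting, pod setting) -> mounted?, matching the log lines above.
assert token_mounted(None,  None)  is True    # defaultsa
assert token_mounted(True,  None)  is True    # mountsa
assert token_mounted(False, None)  is False   # nomountsa
assert token_mounted(None,  True)  is True    # defaultsa-mountspec
assert token_mounted(False, True)  is True    # nomountsa-mountspec
assert token_mounted(None,  False) is False   # defaultsa-nomountspec
assert token_mounted(True,  False) is False   # mountsa-nomountspec
```

Note in particular `nomountsa-mountspec` (True) and `mountsa-nomountspec` (False): the pod spec overrides the ServiceAccount in both directions.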
S
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:41:42.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:41:43.431: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul  4 09:41:44.201: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul  4 09:41:49.258: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  4 09:41:59.669: INFO: Creating deployment "test-rolling-update-deployment"
Jul  4 09:41:59.732: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jul  4 09:42:00.631: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul  4 09:42:02.899: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jul  4 09:42:02.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452521, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452520, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:42:05.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452521, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452520, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:42:07.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452521, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452520, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:42:09.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452521, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452528, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729452520, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 09:42:10.933: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul  4 09:42:10.943: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-3799 /apis/apps/v1/namespaces/deployment-3799/deployments/test-rolling-update-deployment 52d8eab0-dcdc-4c74-a400-82b14a64f083 29774 1 2020-07-04 09:41:59 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a72bd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-04 09:42:01 +0000 UTC,LastTransitionTime:2020-07-04 09:42:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-07-04 09:42:10 +0000 UTC,LastTransitionTime:2020-07-04 09:42:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul  4 09:42:10.945: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-3799 /apis/apps/v1/namespaces/deployment-3799/replicasets/test-rolling-update-deployment-67cf4f6444 20b9a6be-b229-46da-b71d-9dba3fad0999 29759 1 2020-07-04 09:42:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 52d8eab0-dcdc-4c74-a400-82b14a64f083 0xc004085677 0xc004085678}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040856e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul  4 09:42:10.945: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul  4 09:42:10.945: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-3799 /apis/apps/v1/namespaces/deployment-3799/replicasets/test-rolling-update-controller be2e19cf-f2a6-472c-bf73-e3cfae6b6369 29773 2 2020-07-04 09:41:43 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 52d8eab0-dcdc-4c74-a400-82b14a64f083 0xc0040855a7 0xc0040855a8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004085608  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  4 09:42:10.948: INFO: Pod "test-rolling-update-deployment-67cf4f6444-6m259" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-6m259 test-rolling-update-deployment-67cf4f6444- deployment-3799 /api/v1/namespaces/deployment-3799/pods/test-rolling-update-deployment-67cf4f6444-6m259 258ab74a-91b0-4011-b1ad-6ea5727a6c45 29758 0 2020-07-04 09:42:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 20b9a6be-b229-46da-b71d-9dba3fad0999 0xc005570137 0xc005570138}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5m5m8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5m5m8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5m5m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:42:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:42:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:42:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 09:42:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.176,StartTime:2020-07-04 09:42:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-04 09:42:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://1412eb6a67a2768fd018c236f847b51decd620d5e61d681190d753b134d3685c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.176,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:42:10.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3799" for this suite.

• [SLOW TEST:28.674 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":234,"skipped":3913,"failed":0}
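The rolling-update check logged above polls DeploymentStatus until UpdatedReplicas/ReadyReplicas/AvailableReplicas converge and UnavailableReplicas reaches 0. Against a live cluster the equivalent check is `kubectl -n deployment-3799 rollout status deployment/test-rolling-update-deployment`; the poll-until-converged pattern itself can be sketched as follows (the `status` function is a stub standing in for the API GET, not part of any real tooling):

```shell
# Poll-until-converged loop, as the e2e deployment test does.
# 'status' is a stand-in for fetching DeploymentStatus from the API server;
# a real run would use: kubectl -n <ns> rollout status deployment/<name>
status() { echo 0; }   # stub: reports 0 unavailable replicas

result=""
for attempt in 1 2 3; do
  unavailable="$(status)"
  if [ "$unavailable" -eq 0 ]; then
    result="rollout complete"
    break
  fi
  sleep 1
done
echo "$result"
```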
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:42:10.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  4 09:42:11.410: INFO: Waiting up to 5m0s for pod "pod-9bfb2b88-b0d9-4100-aae8-b979abe1e543" in namespace "emptydir-3296" to be "success or failure"
Jul  4 09:42:11.502: INFO: Pod "pod-9bfb2b88-b0d9-4100-aae8-b979abe1e543": Phase="Pending", Reason="", readiness=false. Elapsed: 92.317062ms
Jul  4 09:42:13.506: INFO: Pod "pod-9bfb2b88-b0d9-4100-aae8-b979abe1e543": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09650821s
Jul  4 09:42:15.592: INFO: Pod "pod-9bfb2b88-b0d9-4100-aae8-b979abe1e543": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182393033s
Jul  4 09:42:17.595: INFO: Pod "pod-9bfb2b88-b0d9-4100-aae8-b979abe1e543": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.185539321s
STEP: Saw pod success
Jul  4 09:42:17.595: INFO: Pod "pod-9bfb2b88-b0d9-4100-aae8-b979abe1e543" satisfied condition "success or failure"
Jul  4 09:42:17.598: INFO: Trying to get logs from node jerma-worker pod pod-9bfb2b88-b0d9-4100-aae8-b979abe1e543 container test-container: 
STEP: delete the pod
Jul  4 09:42:17.645: INFO: Waiting for pod pod-9bfb2b88-b0d9-4100-aae8-b979abe1e543 to disappear
Jul  4 09:42:17.651: INFO: Pod pod-9bfb2b88-b0d9-4100-aae8-b979abe1e543 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:42:17.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3296" for this suite.

• [SLOW TEST:6.703 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3913,"failed":0}
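The emptyDir spec just verified creates the volume on the node's default medium and expects a directory that a non-root user can write, i.e. mode 0777. Inside the test container this reduces to an ordinary permission check; a minimal local sketch, with a temp directory standing in for the emptyDir mount:

```shell
# Create a scratch dir, give it the 0777 mode the test expects, and read it back.
dir="$(mktemp -d)"
chmod 0777 "$dir"
# GNU stat uses -c for a format string; BSD/macOS stat uses -f.
mode="$(stat -c '%a' "$dir" 2>/dev/null || stat -f '%Lp' "$dir")"
echo "mode=$mode"
rmdir "$dir"
```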
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:42:17.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jul  4 09:42:17.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jul  4 09:42:28.037: INFO: >>> kubeConfig: /root/.kube/config
Jul  4 09:42:30.953: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:42:40.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9718" for this suite.

• [SLOW TEST:22.669 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":236,"skipped":3933,"failed":0}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:42:40.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:43:49.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6960" for this suite.

• [SLOW TEST:68.906 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3936,"failed":0}
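The container-runtime block above starts containers that exit and asserts the reported 'RestartCount', 'Phase', 'Ready' condition, and 'State' match the exit outcome: a zero exit is surfaced as Terminated with Reason=Completed, a non-zero exit as Terminated with Reason=Error. The underlying distinction is plain exit-code semantics; as a sketch (the parenthetical labels describe the status the kubelet would report, they are not produced by this snippet):

```shell
# A clean exit versus a failing exit, as the kubelet would observe them.
sh -c 'exit 0'; rc_ok=$?
sh -c 'exit 1'; rc_fail=$?
echo "clean exit rc=$rc_ok (reported as Terminated, Reason=Completed)"
echo "failed exit rc=$rc_fail (reported as Terminated, Reason=Error)"
```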
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:43:49.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4366
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4366
STEP: Creating statefulset with conflicting port in namespace statefulset-4366
STEP: Waiting until pod test-pod starts running in namespace statefulset-4366
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4366
Jul  4 09:43:56.743: INFO: Observed stateful pod in namespace: statefulset-4366, name: ss-0, uid: b88d93ac-1675-49ce-b37b-c6dee5127945, status phase: Pending. Waiting for statefulset controller to delete.
Jul  4 09:43:56.884: INFO: Observed stateful pod in namespace: statefulset-4366, name: ss-0, uid: b88d93ac-1675-49ce-b37b-c6dee5127945, status phase: Failed. Waiting for statefulset controller to delete.
Jul  4 09:43:56.901: INFO: Observed stateful pod in namespace: statefulset-4366, name: ss-0, uid: b88d93ac-1675-49ce-b37b-c6dee5127945, status phase: Failed. Waiting for statefulset controller to delete.
Jul  4 09:43:57.150: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4366
STEP: Removing pod with conflicting port in namespace statefulset-4366
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4366 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul  4 09:44:06.496: INFO: Deleting all statefulset in ns statefulset-4366
Jul  4 09:44:06.499: INFO: Scaling statefulset ss to 0
Jul  4 09:44:36.872: INFO: Waiting for statefulset status.replicas updated to 0
Jul  4 09:44:37.282: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:44:37.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4366" for this suite.

• [SLOW TEST:48.554 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":238,"skipped":3943,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:44:37.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:44:39.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:44:48.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7820" for this suite.

• [SLOW TEST:10.787 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3950,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:44:48.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Jul  4 09:44:48.622: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix405876308/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:44:48.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5127" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":240,"skipped":3962,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:44:48.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
STEP: creating the pod
Jul  4 09:44:48.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6813'
Jul  4 09:44:52.095: INFO: stderr: ""
Jul  4 09:44:52.095: INFO: stdout: "pod/pause created\n"
Jul  4 09:44:52.095: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul  4 09:44:52.096: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6813" to be "running and ready"
Jul  4 09:44:52.116: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 20.851454ms
Jul  4 09:44:54.500: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.404306504s
Jul  4 09:44:57.148: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.052477987s
Jul  4 09:44:59.152: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.056416005s
Jul  4 09:45:01.155: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.059294344s
Jul  4 09:45:03.164: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.068783544s
Jul  4 09:45:05.170: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 13.074953588s
Jul  4 09:45:05.171: INFO: Pod "pause" satisfied condition "running and ready"
Jul  4 09:45:05.171: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Jul  4 09:45:05.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6813'
Jul  4 09:45:05.274: INFO: stderr: ""
Jul  4 09:45:05.274: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul  4 09:45:05.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6813'
Jul  4 09:45:05.359: INFO: stderr: ""
Jul  4 09:45:05.359: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul  4 09:45:05.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6813'
Jul  4 09:45:05.436: INFO: stderr: ""
Jul  4 09:45:05.436: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul  4 09:45:05.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6813'
Jul  4 09:45:05.544: INFO: stderr: ""
Jul  4 09:45:05.544: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282
STEP: using delete to clean up resources
Jul  4 09:45:05.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6813'
Jul  4 09:45:05.742: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  4 09:45:05.742: INFO: stdout: "pod \"pause\" force deleted\n"
Jul  4 09:45:05.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6813'
Jul  4 09:45:05.965: INFO: stderr: "No resources found in kubectl-6813 namespace.\n"
Jul  4 09:45:05.965: INFO: stdout: ""
Jul  4 09:45:05.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6813 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  4 09:45:06.113: INFO: stderr: ""
Jul  4 09:45:06.113: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:45:06.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6813" for this suite.

• [SLOW TEST:17.422 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":241,"skipped":3967,"failed":0}
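Each completed spec emits a one-line JSON progress record like the one above (`total`, `completed`, `skipped`, `failed`). As a minimal sketch — not part of the suite itself — these records can be tallied with the standard library; the `summarize` helper below is hypothetical:

```python
import json

# Example progress records as emitted by the e2e runner (one JSON object per spec).
records = [
    '{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":241,"skipped":3967,"failed":0}',
    '{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3978,"failed":0}',
]

def summarize(lines):
    """Parse JSON progress lines and report progress from the latest record."""
    last = json.loads(lines[-1])
    return last["completed"], last["total"], last["failed"]

completed, total, failed = summarize(records)
print(f"{completed}/{total} specs completed, {failed} failed")  # 242/278 specs completed, 0 failed
```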
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:45:06.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 09:45:06.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ad32430-2671-428e-b4f2-26323cea36bb" in namespace "downward-api-6350" to be "success or failure"
Jul  4 09:45:06.404: INFO: Pod "downwardapi-volume-7ad32430-2671-428e-b4f2-26323cea36bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238119ms
Jul  4 09:45:08.474: INFO: Pod "downwardapi-volume-7ad32430-2671-428e-b4f2-26323cea36bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074180391s
Jul  4 09:45:10.477: INFO: Pod "downwardapi-volume-7ad32430-2671-428e-b4f2-26323cea36bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077316126s
STEP: Saw pod success
Jul  4 09:45:10.477: INFO: Pod "downwardapi-volume-7ad32430-2671-428e-b4f2-26323cea36bb" satisfied condition "success or failure"
Jul  4 09:45:10.479: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7ad32430-2671-428e-b4f2-26323cea36bb container client-container: 
STEP: delete the pod
Jul  4 09:45:10.757: INFO: Waiting for pod downwardapi-volume-7ad32430-2671-428e-b4f2-26323cea36bb to disappear
Jul  4 09:45:10.824: INFO: Pod downwardapi-volume-7ad32430-2671-428e-b4f2-26323cea36bb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:45:10.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6350" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3978,"failed":0}
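The spec above checks that when a container declares no memory limit, the downward API volume reports the node's allocatable memory instead. A simplified sketch of that fallback rule (the helper name is illustrative, not the framework's code):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """Return the container's memory limit in bytes, falling back to node
    allocatable when no limit is set -- the behavior the test verifies."""
    return container_limit if container_limit is not None else node_allocatable

# No limit set: the node's allocatable memory is what the pod sees.
print(effective_memory_limit(None, 8 * 1024**3))
# Limit set: the explicit limit wins.
print(effective_memory_limit(512 * 1024**2, 8 * 1024**3))
```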
SSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:45:10.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  4 09:45:11.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1877'
Jul  4 09:45:11.109: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  4 09:45:11.109: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686
Jul  4 09:45:11.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1877'
Jul  4 09:45:11.308: INFO: stderr: ""
Jul  4 09:45:11.308: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:45:11.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1877" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":278,"completed":243,"skipped":3981,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:45:11.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-842.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-842.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-842.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-842.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  4 09:45:19.748: INFO: DNS probes using dns-test-35b0dac9-d6b2-4afb-95e9-7ed175450bdd succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-842.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-842.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-842.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-842.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  4 09:46:02.291: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-842.svc.cluster.local from pod dns-842/dns-test-9e9fbeab-17f2-42b9-80f6-dbc74f6a7f1d: the server could not find the requested resource (get pods dns-test-9e9fbeab-17f2-42b9-80f6-dbc74f6a7f1d)
Jul  4 09:46:02.293: INFO: Lookups using dns-842/dns-test-9e9fbeab-17f2-42b9-80f6-dbc74f6a7f1d failed for: [wheezy_udp@dns-test-service-3.dns-842.svc.cluster.local]

Jul  4 09:46:07.747: INFO: DNS probes using dns-test-9e9fbeab-17f2-42b9-80f6-dbc74f6a7f1d succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-842.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-842.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-842.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-842.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  4 09:46:42.392: INFO: DNS probes using dns-test-fd13bea0-cf68-4c98-9ca1-f2853037ac5a succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:46:44.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-842" for this suite.

• [SLOW TEST:92.837 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":244,"skipped":3994,"failed":0}
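The probe pods above retry each lookup in a loop (`for i in `seq 1 30`; do dig +short ...; done`) and record success only once a nonempty answer arrives, which is why the transient "Unable to read ... / Lookups ... failed" lines resolve into "DNS probes ... succeeded". A stand-in sketch of that retry pattern with a stubbed resolver (all names here are illustrative):

```python
def probe(resolve, name, attempts=30):
    """Retry a lookup until it yields a nonempty answer, mirroring the
    dig retry loop run inside the wheezy/jessie probe pods."""
    for _ in range(attempts):
        answer = resolve(name)
        if answer:
            return answer
    return None

# Stub resolver: the CNAME record "propagates" only on the third attempt.
answers = iter(["", "", "bar.example.com."])
print(probe(lambda name: next(answers), "dns-test-service-3.dns-842.svc.cluster.local"))
# bar.example.com.
```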
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:46:44.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:46:48.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1649" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4011,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:46:48.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Jul  4 09:46:52.843: INFO: Pod pod-hostip-75338deb-c9fd-4592-8122-86661a073f13 has hostIP: 172.17.0.10
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:46:52.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-371" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4030,"failed":0}
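The hostIP check above reads `status.hostIP` from the pod object once the pod is scheduled. A minimal sketch of that extraction against a pod serialized as a dict (sample data, not live API output):

```python
def host_ip(pod):
    """Return status.hostIP from a pod object (dict form), or None if unset,
    e.g. before the pod has been bound to a node."""
    return pod.get("status", {}).get("hostIP")

pod = {"metadata": {"name": "pod-hostip-75338deb"}, "status": {"hostIP": "172.17.0.10"}}
print(host_ip(pod))  # 172.17.0.10
```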
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:46:52.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6208
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jul  4 09:46:52.943: INFO: Found 0 stateful pods, waiting for 3
Jul  4 09:47:03.470: INFO: Found 2 stateful pods, waiting for 3
Jul  4 09:47:13.222: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:47:13.223: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:47:13.223: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  4 09:47:22.947: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:47:22.947: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:47:22.947: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:47:22.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6208 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  4 09:47:23.803: INFO: stderr: "I0704 09:47:23.078134    4644 log.go:172] (0xc0001051e0) (0xc0006cbc20) Create stream\nI0704 09:47:23.078192    4644 log.go:172] (0xc0001051e0) (0xc0006cbc20) Stream added, broadcasting: 1\nI0704 09:47:23.080577    4644 log.go:172] (0xc0001051e0) Reply frame received for 1\nI0704 09:47:23.080632    4644 log.go:172] (0xc0001051e0) (0xc0006cbe00) Create stream\nI0704 09:47:23.080652    4644 log.go:172] (0xc0001051e0) (0xc0006cbe00) Stream added, broadcasting: 3\nI0704 09:47:23.081847    4644 log.go:172] (0xc0001051e0) Reply frame received for 3\nI0704 09:47:23.081902    4644 log.go:172] (0xc0001051e0) (0xc0009fc000) Create stream\nI0704 09:47:23.081922    4644 log.go:172] (0xc0001051e0) (0xc0009fc000) Stream added, broadcasting: 5\nI0704 09:47:23.082792    4644 log.go:172] (0xc0001051e0) Reply frame received for 5\nI0704 09:47:23.150945    4644 log.go:172] (0xc0001051e0) Data frame received for 5\nI0704 09:47:23.150961    4644 log.go:172] (0xc0009fc000) (5) Data frame handling\nI0704 09:47:23.150968    4644 log.go:172] (0xc0009fc000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0704 09:47:23.796829    4644 log.go:172] (0xc0001051e0) Data frame received for 3\nI0704 09:47:23.796859    4644 log.go:172] (0xc0006cbe00) (3) Data frame handling\nI0704 09:47:23.796871    4644 log.go:172] (0xc0006cbe00) (3) Data frame sent\nI0704 09:47:23.797581    4644 log.go:172] (0xc0001051e0) Data frame received for 3\nI0704 09:47:23.797615    4644 log.go:172] (0xc0006cbe00) (3) Data frame handling\nI0704 09:47:23.797640    4644 log.go:172] (0xc0001051e0) Data frame received for 5\nI0704 09:47:23.797688    4644 log.go:172] (0xc0009fc000) (5) Data frame handling\nI0704 09:47:23.799906    4644 log.go:172] (0xc0001051e0) Data frame received for 1\nI0704 09:47:23.799927    4644 log.go:172] (0xc0006cbc20) (1) Data frame handling\nI0704 09:47:23.799941    4644 log.go:172] (0xc0006cbc20) (1) Data frame sent\nI0704 09:47:23.799954    4644 log.go:172] (0xc0001051e0) (0xc0006cbc20) Stream removed, broadcasting: 1\nI0704 09:47:23.800113    4644 log.go:172] (0xc0001051e0) Go away received\nI0704 09:47:23.800258    4644 log.go:172] (0xc0001051e0) (0xc0006cbc20) Stream removed, broadcasting: 1\nI0704 09:47:23.800277    4644 log.go:172] (0xc0001051e0) (0xc0006cbe00) Stream removed, broadcasting: 3\nI0704 09:47:23.800284    4644 log.go:172] (0xc0001051e0) (0xc0009fc000) Stream removed, broadcasting: 5\n"
Jul  4 09:47:23.803: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  4 09:47:23.803: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul  4 09:47:35.235: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul  4 09:47:46.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6208 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:47:47.106: INFO: stderr: "I0704 09:47:47.055259    4666 log.go:172] (0xc000982000) (0xc0009f0000) Create stream\nI0704 09:47:47.055304    4666 log.go:172] (0xc000982000) (0xc0009f0000) Stream added, broadcasting: 1\nI0704 09:47:47.056964    4666 log.go:172] (0xc000982000) Reply frame received for 1\nI0704 09:47:47.057000    4666 log.go:172] (0xc000982000) (0xc00066dae0) Create stream\nI0704 09:47:47.057009    4666 log.go:172] (0xc000982000) (0xc00066dae0) Stream added, broadcasting: 3\nI0704 09:47:47.057688    4666 log.go:172] (0xc000982000) Reply frame received for 3\nI0704 09:47:47.057726    4666 log.go:172] (0xc000982000) (0xc0009f00a0) Create stream\nI0704 09:47:47.057740    4666 log.go:172] (0xc000982000) (0xc0009f00a0) Stream added, broadcasting: 5\nI0704 09:47:47.058278    4666 log.go:172] (0xc000982000) Reply frame received for 5\nI0704 09:47:47.101055    4666 log.go:172] (0xc000982000) Data frame received for 5\nI0704 09:47:47.101072    4666 log.go:172] (0xc0009f00a0) (5) Data frame handling\nI0704 09:47:47.101082    4666 log.go:172] (0xc0009f00a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0704 09:47:47.101101    4666 log.go:172] (0xc000982000) Data frame received for 3\nI0704 09:47:47.101247    4666 log.go:172] (0xc00066dae0) (3) Data frame handling\nI0704 09:47:47.101272    4666 log.go:172] (0xc00066dae0) (3) Data frame sent\nI0704 09:47:47.101293    4666 log.go:172] (0xc000982000) Data frame received for 3\nI0704 09:47:47.101314    4666 log.go:172] (0xc00066dae0) (3) Data frame handling\nI0704 09:47:47.101332    4666 log.go:172] (0xc000982000) Data frame received for 5\nI0704 09:47:47.101346    4666 log.go:172] (0xc0009f00a0) (5) Data frame handling\nI0704 09:47:47.102258    4666 log.go:172] (0xc000982000) Data frame received for 1\nI0704 09:47:47.102275    4666 log.go:172] (0xc0009f0000) (1) Data frame handling\nI0704 09:47:47.102284    4666 log.go:172] (0xc0009f0000) (1) Data frame sent\nI0704 09:47:47.102380    4666 log.go:172] (0xc000982000) (0xc0009f0000) Stream removed, broadcasting: 1\nI0704 09:47:47.102421    4666 log.go:172] (0xc000982000) Go away received\nI0704 09:47:47.102769    4666 log.go:172] (0xc000982000) (0xc0009f0000) Stream removed, broadcasting: 1\nI0704 09:47:47.102786    4666 log.go:172] (0xc000982000) (0xc00066dae0) Stream removed, broadcasting: 3\nI0704 09:47:47.102797    4666 log.go:172] (0xc000982000) (0xc0009f00a0) Stream removed, broadcasting: 5\n"
Jul  4 09:47:47.106: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  4 09:47:47.106: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  4 09:47:57.178: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
Jul  4 09:47:57.178: INFO: Waiting for Pod statefulset-6208/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:47:57.178: INFO: Waiting for Pod statefulset-6208/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:47:57.178: INFO: Waiting for Pod statefulset-6208/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:48:07.184: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
Jul  4 09:48:07.184: INFO: Waiting for Pod statefulset-6208/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:48:07.184: INFO: Waiting for Pod statefulset-6208/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:48:17.185: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
Jul  4 09:48:17.185: INFO: Waiting for Pod statefulset-6208/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:48:17.185: INFO: Waiting for Pod statefulset-6208/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:48:27.926: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
Jul  4 09:48:27.926: INFO: Waiting for Pod statefulset-6208/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:48:27.926: INFO: Waiting for Pod statefulset-6208/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:48:37.187: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
Jul  4 09:48:37.187: INFO: Waiting for Pod statefulset-6208/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:48:37.187: INFO: Waiting for Pod statefulset-6208/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:48:47.185: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
Jul  4 09:48:47.185: INFO: Waiting for Pod statefulset-6208/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:48:47.185: INFO: Waiting for Pod statefulset-6208/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:48:57.184: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
Jul  4 09:48:57.184: INFO: Waiting for Pod statefulset-6208/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:49:07.248: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
Jul  4 09:49:07.248: INFO: Waiting for Pod statefulset-6208/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:49:17.185: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
STEP: Rolling back to a previous revision
Jul  4 09:49:27.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6208 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  4 09:49:27.393: INFO: stderr: "I0704 09:49:27.291463    4686 log.go:172] (0xc000a5f1e0) (0xc000a56140) Create stream\nI0704 09:49:27.291518    4686 log.go:172] (0xc000a5f1e0) (0xc000a56140) Stream added, broadcasting: 1\nI0704 09:49:27.293949    4686 log.go:172] (0xc000a5f1e0) Reply frame received for 1\nI0704 09:49:27.293987    4686 log.go:172] (0xc000a5f1e0) (0xc0009cc000) Create stream\nI0704 09:49:27.294003    4686 log.go:172] (0xc000a5f1e0) (0xc0009cc000) Stream added, broadcasting: 3\nI0704 09:49:27.294827    4686 log.go:172] (0xc000a5f1e0) Reply frame received for 3\nI0704 09:49:27.294860    4686 log.go:172] (0xc000a5f1e0) (0xc000a56460) Create stream\nI0704 09:49:27.294873    4686 log.go:172] (0xc000a5f1e0) (0xc000a56460) Stream added, broadcasting: 5\nI0704 09:49:27.295693    4686 log.go:172] (0xc000a5f1e0) Reply frame received for 5\nI0704 09:49:27.362174    4686 log.go:172] (0xc000a5f1e0) Data frame received for 5\nI0704 09:49:27.362194    4686 log.go:172] (0xc000a56460) (5) Data frame handling\nI0704 09:49:27.362207    4686 log.go:172] (0xc000a56460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0704 09:49:27.387300    4686 log.go:172] (0xc000a5f1e0) Data frame received for 3\nI0704 09:49:27.387333    4686 log.go:172] (0xc0009cc000) (3) Data frame handling\nI0704 09:49:27.387373    4686 log.go:172] (0xc0009cc000) (3) Data frame sent\nI0704 09:49:27.387470    4686 log.go:172] (0xc000a5f1e0) Data frame received for 3\nI0704 09:49:27.387487    4686 log.go:172] (0xc0009cc000) (3) Data frame handling\nI0704 09:49:27.387626    4686 log.go:172] (0xc000a5f1e0) Data frame received for 5\nI0704 09:49:27.387652    4686 log.go:172] (0xc000a56460) (5) Data frame handling\nI0704 09:49:27.388883    4686 log.go:172] (0xc000a5f1e0) Data frame received for 1\nI0704 09:49:27.388914    4686 log.go:172] (0xc000a56140) (1) Data frame handling\nI0704 09:49:27.388942    4686 log.go:172] (0xc000a56140) (1) Data frame sent\nI0704 09:49:27.388972    4686 log.go:172] (0xc000a5f1e0) (0xc000a56140) Stream removed, broadcasting: 1\nI0704 09:49:27.389001    4686 log.go:172] (0xc000a5f1e0) Go away received\nI0704 09:49:27.389656    4686 log.go:172] (0xc000a5f1e0) (0xc000a56140) Stream removed, broadcasting: 1\nI0704 09:49:27.389681    4686 log.go:172] (0xc000a5f1e0) (0xc0009cc000) Stream removed, broadcasting: 3\nI0704 09:49:27.389692    4686 log.go:172] (0xc000a5f1e0) (0xc000a56460) Stream removed, broadcasting: 5\n"
Jul  4 09:49:27.394: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  4 09:49:27.394: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  4 09:49:37.591: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul  4 09:49:48.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6208 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  4 09:49:49.068: INFO: stderr: "I0704 09:49:48.940729    4704 log.go:172] (0xc000b66210) (0xc000bd46e0) Create stream\nI0704 09:49:48.940780    4704 log.go:172] (0xc000b66210) (0xc000bd46e0) Stream added, broadcasting: 1\nI0704 09:49:48.944921    4704 log.go:172] (0xc000b66210) Reply frame received for 1\nI0704 09:49:48.944951    4704 log.go:172] (0xc000b66210) (0xc00052d5e0) Create stream\nI0704 09:49:48.944962    4704 log.go:172] (0xc000b66210) (0xc00052d5e0) Stream added, broadcasting: 3\nI0704 09:49:48.946408    4704 log.go:172] (0xc000b66210) Reply frame received for 3\nI0704 09:49:48.946458    4704 log.go:172] (0xc000b66210) (0xc00082fc20) Create stream\nI0704 09:49:48.946477    4704 log.go:172] (0xc000b66210) (0xc00082fc20) Stream added, broadcasting: 5\nI0704 09:49:48.947186    4704 log.go:172] (0xc000b66210) Reply frame received for 5\nI0704 09:49:49.028272    4704 log.go:172] (0xc000b66210) Data frame received for 5\nI0704 09:49:49.028300    4704 log.go:172] (0xc00082fc20) (5) Data frame handling\nI0704 09:49:49.028319    4704 log.go:172] (0xc00082fc20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0704 09:49:49.060837    4704 log.go:172] (0xc000b66210) Data frame received for 3\nI0704 09:49:49.060880    4704 log.go:172] (0xc00052d5e0) (3) Data frame handling\nI0704 09:49:49.061004    4704 log.go:172] (0xc00052d5e0) (3) Data frame sent\nI0704 09:49:49.061628    4704 log.go:172] (0xc000b66210) Data frame received for 3\nI0704 09:49:49.061651    4704 log.go:172] (0xc00052d5e0) (3) Data frame handling\nI0704 09:49:49.061666    4704 log.go:172] (0xc000b66210) Data frame received for 5\nI0704 09:49:49.061672    4704 log.go:172] (0xc00082fc20) (5) Data frame handling\nI0704 09:49:49.062981    4704 log.go:172] (0xc000b66210) Data frame received for 1\nI0704 09:49:49.063003    4704 log.go:172] (0xc000bd46e0) (1) Data frame handling\nI0704 09:49:49.063026    4704 log.go:172] (0xc000bd46e0) (1) Data frame sent\nI0704 09:49:49.063041    4704 log.go:172] (0xc000b66210) (0xc000bd46e0) Stream removed, broadcasting: 1\nI0704 09:49:49.063059    4704 log.go:172] (0xc000b66210) Go away received\nI0704 09:49:49.063565    4704 log.go:172] (0xc000b66210) (0xc000bd46e0) Stream removed, broadcasting: 1\nI0704 09:49:49.063593    4704 log.go:172] (0xc000b66210) (0xc00052d5e0) Stream removed, broadcasting: 3\nI0704 09:49:49.063614    4704 log.go:172] (0xc000b66210) (0xc00082fc20) Stream removed, broadcasting: 5\n"
Jul  4 09:49:49.068: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  4 09:49:49.068: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  4 09:49:59.232: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
Jul  4 09:49:59.232: INFO: Waiting for Pod statefulset-6208/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul  4 09:49:59.232: INFO: Waiting for Pod statefulset-6208/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul  4 09:50:09.239: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
Jul  4 09:50:09.239: INFO: Waiting for Pod statefulset-6208/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul  4 09:50:09.239: INFO: Waiting for Pod statefulset-6208/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul  4 09:50:19.239: INFO: Waiting for StatefulSet statefulset-6208/ss2 to complete update
Jul  4 09:50:19.239: INFO: Waiting for Pod statefulset-6208/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul  4 09:50:29.238: INFO: Deleting all statefulset in ns statefulset-6208
Jul  4 09:50:29.241: INFO: Scaling statefulset ss2 to 0
Jul  4 09:50:59.264: INFO: Waiting for statefulset status.replicas updated to 0
Jul  4 09:50:59.267: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:50:59.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6208" for this suite.

• [SLOW TEST:246.420 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":247,"skipped":4040,"failed":0}
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:50:59.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:51:06.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3247" for this suite.

• [SLOW TEST:7.180 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":248,"skipped":4041,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:51:06.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul  4 09:51:07.615: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1248 /api/v1/namespaces/watch-1248/configmaps/e2e-watch-test-label-changed 7258d81c-fce9-4284-99b1-636a70ad0c65 32245 0 2020-07-04 09:51:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  4 09:51:07.615: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1248 /api/v1/namespaces/watch-1248/configmaps/e2e-watch-test-label-changed 7258d81c-fce9-4284-99b1-636a70ad0c65 32246 0 2020-07-04 09:51:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul  4 09:51:07.615: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1248 /api/v1/namespaces/watch-1248/configmaps/e2e-watch-test-label-changed 7258d81c-fce9-4284-99b1-636a70ad0c65 32248 0 2020-07-04 09:51:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul  4 09:51:18.616: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1248 /api/v1/namespaces/watch-1248/configmaps/e2e-watch-test-label-changed 7258d81c-fce9-4284-99b1-636a70ad0c65 32286 0 2020-07-04 09:51:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  4 09:51:18.616: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1248 /api/v1/namespaces/watch-1248/configmaps/e2e-watch-test-label-changed 7258d81c-fce9-4284-99b1-636a70ad0c65 32288 0 2020-07-04 09:51:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jul  4 09:51:18.616: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1248 /api/v1/namespaces/watch-1248/configmaps/e2e-watch-test-label-changed 7258d81c-fce9-4284-99b1-636a70ad0c65 32289 0 2020-07-04 09:51:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:51:18.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1248" for this suite.

• [SLOW TEST:12.175 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":249,"skipped":4042,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:51:18.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 09:51:18.840: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b985c32-fb69-4f2b-aedc-80a1f6aa2c57" in namespace "projected-4920" to be "success or failure"
Jul  4 09:51:19.643: INFO: Pod "downwardapi-volume-6b985c32-fb69-4f2b-aedc-80a1f6aa2c57": Phase="Pending", Reason="", readiness=false. Elapsed: 803.360714ms
Jul  4 09:51:21.841: INFO: Pod "downwardapi-volume-6b985c32-fb69-4f2b-aedc-80a1f6aa2c57": Phase="Pending", Reason="", readiness=false. Elapsed: 3.000678599s
Jul  4 09:51:24.143: INFO: Pod "downwardapi-volume-6b985c32-fb69-4f2b-aedc-80a1f6aa2c57": Phase="Pending", Reason="", readiness=false. Elapsed: 5.302912381s
Jul  4 09:51:27.641: INFO: Pod "downwardapi-volume-6b985c32-fb69-4f2b-aedc-80a1f6aa2c57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.801243532s
STEP: Saw pod success
Jul  4 09:51:27.641: INFO: Pod "downwardapi-volume-6b985c32-fb69-4f2b-aedc-80a1f6aa2c57" satisfied condition "success or failure"
Jul  4 09:51:31.037: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6b985c32-fb69-4f2b-aedc-80a1f6aa2c57 container client-container: 
STEP: delete the pod
Jul  4 09:51:37.918: INFO: Waiting for pod downwardapi-volume-6b985c32-fb69-4f2b-aedc-80a1f6aa2c57 to disappear
Jul  4 09:51:37.921: INFO: Pod downwardapi-volume-6b985c32-fb69-4f2b-aedc-80a1f6aa2c57 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:51:37.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4920" for this suite.

• [SLOW TEST:20.729 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4049,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:51:39.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6201
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jul  4 09:51:43.816: INFO: Found 0 stateful pods, waiting for 3
Jul  4 09:51:53.864: INFO: Found 1 stateful pods, waiting for 3
Jul  4 09:52:05.698: INFO: Found 1 stateful pods, waiting for 3
Jul  4 09:52:13.828: INFO: Found 1 stateful pods, waiting for 3
Jul  4 09:52:24.144: INFO: Found 2 stateful pods, waiting for 3
Jul  4 09:52:34.445: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:52:34.445: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:52:34.445: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  4 09:52:43.824: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:52:43.824: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:52:43.824: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul  4 09:52:43.845: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul  4 09:52:55.769: INFO: Updating stateful set ss2
Jul  4 09:52:55.774: INFO: Waiting for Pod statefulset-6201/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:53:06.090: INFO: Waiting for Pod statefulset-6201/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jul  4 09:53:19.955: INFO: Found 2 stateful pods, waiting for 3
Jul  4 09:53:30.199: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:53:30.199: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:53:30.199: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  4 09:53:40.154: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:53:40.154: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  4 09:53:40.154: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul  4 09:53:40.179: INFO: Updating stateful set ss2
Jul  4 09:53:40.350: INFO: Waiting for Pod statefulset-6201/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:53:52.339: INFO: Updating stateful set ss2
Jul  4 09:53:54.142: INFO: Waiting for StatefulSet statefulset-6201/ss2 to complete update
Jul  4 09:53:54.142: INFO: Waiting for Pod statefulset-6201/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:54:04.185: INFO: Waiting for StatefulSet statefulset-6201/ss2 to complete update
Jul  4 09:54:04.185: INFO: Waiting for Pod statefulset-6201/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:54:15.047: INFO: Waiting for StatefulSet statefulset-6201/ss2 to complete update
Jul  4 09:54:15.047: INFO: Waiting for Pod statefulset-6201/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  4 09:54:24.191: INFO: Waiting for StatefulSet statefulset-6201/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul  4 09:54:34.149: INFO: Deleting all statefulset in ns statefulset-6201
Jul  4 09:54:34.151: INFO: Scaling statefulset ss2 to 0
Jul  4 09:55:04.249: INFO: Waiting for statefulset status.replicas updated to 0
Jul  4 09:55:04.251: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:55:04.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6201" for this suite.

• [SLOW TEST:205.031 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":251,"skipped":4093,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:55:04.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul  4 09:55:19.348: INFO: Successfully updated pod "labelsupdate3294a3a7-4cdf-4b7c-99a1-ea9f9d7800f8"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:55:21.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-524" for this suite.

• [SLOW TEST:17.005 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4132,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:55:21.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-b311b437-f3f8-49a6-80f6-b872fa52749d
STEP: Creating secret with name secret-projected-all-test-volume-e9fe63a4-283f-4d93-aad9-c93f2758b811
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul  4 09:55:21.633: INFO: Waiting up to 5m0s for pod "projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1" in namespace "projected-6075" to be "success or failure"
Jul  4 09:55:21.635: INFO: Pod "projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321336ms
Jul  4 09:55:23.735: INFO: Pod "projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101761461s
Jul  4 09:55:25.738: INFO: Pod "projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105558179s
Jul  4 09:55:27.789: INFO: Pod "projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15611208s
Jul  4 09:55:30.039: INFO: Pod "projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.406494092s
Jul  4 09:55:32.339: INFO: Pod "projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1": Phase="Running", Reason="", readiness=true. Elapsed: 10.705972063s
Jul  4 09:55:34.963: INFO: Pod "projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1": Phase="Running", Reason="", readiness=true. Elapsed: 13.329917669s
Jul  4 09:55:36.966: INFO: Pod "projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.333008453s
STEP: Saw pod success
Jul  4 09:55:36.966: INFO: Pod "projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1" satisfied condition "success or failure"
Jul  4 09:55:36.968: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1 container projected-all-volume-test: 
STEP: delete the pod
Jul  4 09:55:37.095: INFO: Waiting for pod projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1 to disappear
Jul  4 09:55:37.117: INFO: Pod projected-volume-d2ab0ec5-b037-43f2-a322-4c8b0bbcb7c1 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:55:37.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6075" for this suite.

• [SLOW TEST:15.713 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4143,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:55:37.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:55:54.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2077" for this suite.

• [SLOW TEST:17.148 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":254,"skipped":4148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:55:54.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 09:55:54.410: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9a404ea-8595-4844-a4a5-7588c85520b1" in namespace "downward-api-9907" to be "success or failure"
Jul  4 09:55:54.448: INFO: Pod "downwardapi-volume-f9a404ea-8595-4844-a4a5-7588c85520b1": Phase="Pending", Reason="", readiness=false. Elapsed: 37.895107ms
Jul  4 09:55:56.452: INFO: Pod "downwardapi-volume-f9a404ea-8595-4844-a4a5-7588c85520b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041756234s
Jul  4 09:55:58.456: INFO: Pod "downwardapi-volume-f9a404ea-8595-4844-a4a5-7588c85520b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045847708s
Jul  4 09:56:00.501: INFO: Pod "downwardapi-volume-f9a404ea-8595-4844-a4a5-7588c85520b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09072531s
STEP: Saw pod success
Jul  4 09:56:00.501: INFO: Pod "downwardapi-volume-f9a404ea-8595-4844-a4a5-7588c85520b1" satisfied condition "success or failure"
Jul  4 09:56:00.525: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f9a404ea-8595-4844-a4a5-7588c85520b1 container client-container: 
STEP: delete the pod
Jul  4 09:56:01.001: INFO: Waiting for pod downwardapi-volume-f9a404ea-8595-4844-a4a5-7588c85520b1 to disappear
Jul  4 09:56:01.015: INFO: Pod downwardapi-volume-f9a404ea-8595-4844-a4a5-7588c85520b1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:56:01.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9907" for this suite.

• [SLOW TEST:6.749 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4176,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:56:01.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:56:01.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1804" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":256,"skipped":4199,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:56:01.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 09:56:01.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jul  4 09:56:02.083: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-04T09:56:01Z generation:1 name:name1 resourceVersion:33742 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b57185ef-a4a8-42e7-a797-f9b79a23a9d8] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jul  4 09:56:12.088: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-04T09:56:12Z generation:1 name:name2 resourceVersion:33785 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:14a95b01-fdb9-4f67-ae46-a71a58a6d5d2] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jul  4 09:56:22.092: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-04T09:56:01Z generation:2 name:name1 resourceVersion:33828 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b57185ef-a4a8-42e7-a797-f9b79a23a9d8] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jul  4 09:56:32.295: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-04T09:56:12Z generation:2 name:name2 resourceVersion:33854 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:14a95b01-fdb9-4f67-ae46-a71a58a6d5d2] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jul  4 09:56:42.974: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-04T09:56:01Z generation:2 name:name1 resourceVersion:33881 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b57185ef-a4a8-42e7-a797-f9b79a23a9d8] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jul  4 09:56:52.982: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-04T09:56:12Z generation:2 name:name2 resourceVersion:33907 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:14a95b01-fdb9-4f67-ae46-a71a58a6d5d2] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:57:03.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-3768" for this suite.

• [SLOW TEST:62.258 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":257,"skipped":4224,"failed":0}
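The watch events logged above follow a strict per-object lifecycle: each CR (`name1`, `name2`) is seen as ADDED, then MODIFIED, then DELETED, matching the create/modify/delete steps the test performs. A minimal Python sketch of that ordering check, using the (event type, object name) pairs exactly as they appear in the log — the checker itself is illustrative, not the suite's code:

```python
# Watch events as (type, object name), in the order the log above reports them.
events = [
    ("ADDED", "name1"),
    ("ADDED", "name2"),
    ("MODIFIED", "name1"),
    ("MODIFIED", "name2"),
    ("DELETED", "name1"),
    ("DELETED", "name2"),
]

def lifecycle_ok(events):
    """Check every object sees ADDED -> MODIFIED -> DELETED, in order."""
    expected = ["ADDED", "MODIFIED", "DELETED"]
    seen = {}
    for etype, name in events:
        history = seen.setdefault(name, [])
        # The next event for this object must be the next lifecycle stage.
        if len(history) >= len(expected) or etype != expected[len(history)]:
            return False
        history.append(etype)
    # Every object must have completed the full lifecycle.
    return all(h == expected for h in seen.values())

print(lifecycle_ok(events))
```

Note that the ordering guarantee is per object; the interleaving of `name1` and `name2` events is incidental to the test's pacing (one step every ~10s above), and the checker only enforces the per-object sequence.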
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:57:03.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:57:15.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6188" for this suite.

• [SLOW TEST:12.051 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":258,"skipped":4249,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:57:15.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul  4 09:57:25.558: INFO: Pod name wrapped-volume-race-f225418f-839b-45b7-afcd-788210c612e2: Found 0 pods out of 5
Jul  4 09:57:30.599: INFO: Pod name wrapped-volume-race-f225418f-839b-45b7-afcd-788210c612e2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f225418f-839b-45b7-afcd-788210c612e2 in namespace emptydir-wrapper-2177, will wait for the garbage collector to delete the pods
Jul  4 09:57:50.970: INFO: Deleting ReplicationController wrapped-volume-race-f225418f-839b-45b7-afcd-788210c612e2 took: 14.32059ms
Jul  4 09:57:51.370: INFO: Terminating ReplicationController wrapped-volume-race-f225418f-839b-45b7-afcd-788210c612e2 pods took: 400.257483ms
STEP: Creating RC which spawns configmap-volume pods
Jul  4 09:58:08.009: INFO: Pod name wrapped-volume-race-4024e2d9-3890-409d-9ca4-ab0cf40960a6: Found 0 pods out of 5
Jul  4 09:58:13.177: INFO: Pod name wrapped-volume-race-4024e2d9-3890-409d-9ca4-ab0cf40960a6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4024e2d9-3890-409d-9ca4-ab0cf40960a6 in namespace emptydir-wrapper-2177, will wait for the garbage collector to delete the pods
Jul  4 09:58:35.761: INFO: Deleting ReplicationController wrapped-volume-race-4024e2d9-3890-409d-9ca4-ab0cf40960a6 took: 69.295841ms
Jul  4 09:58:36.161: INFO: Terminating ReplicationController wrapped-volume-race-4024e2d9-3890-409d-9ca4-ab0cf40960a6 pods took: 400.238498ms
STEP: Creating RC which spawns configmap-volume pods
Jul  4 09:59:16.703: INFO: Pod name wrapped-volume-race-353b2e47-48aa-4b38-87f8-25e02fb23352: Found 0 pods out of 5
Jul  4 09:59:21.709: INFO: Pod name wrapped-volume-race-353b2e47-48aa-4b38-87f8-25e02fb23352: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-353b2e47-48aa-4b38-87f8-25e02fb23352 in namespace emptydir-wrapper-2177, will wait for the garbage collector to delete the pods
Jul  4 09:59:35.869: INFO: Deleting ReplicationController wrapped-volume-race-353b2e47-48aa-4b38-87f8-25e02fb23352 took: 6.842571ms
Jul  4 09:59:36.269: INFO: Terminating ReplicationController wrapped-volume-race-353b2e47-48aa-4b38-87f8-25e02fb23352 pods took: 400.200352ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:59:49.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2177" for this suite.

• [SLOW TEST:153.536 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":259,"skipped":4260,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:59:49.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  4 09:59:49.185: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a69c85bc-2b07-4c5c-97dd-78b376144c22" in namespace "downward-api-5562" to be "success or failure"
Jul  4 09:59:49.189: INFO: Pod "downwardapi-volume-a69c85bc-2b07-4c5c-97dd-78b376144c22": Phase="Pending", Reason="", readiness=false. Elapsed: 3.824194ms
Jul  4 09:59:51.244: INFO: Pod "downwardapi-volume-a69c85bc-2b07-4c5c-97dd-78b376144c22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058234342s
Jul  4 09:59:53.247: INFO: Pod "downwardapi-volume-a69c85bc-2b07-4c5c-97dd-78b376144c22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060991843s
STEP: Saw pod success
Jul  4 09:59:53.247: INFO: Pod "downwardapi-volume-a69c85bc-2b07-4c5c-97dd-78b376144c22" satisfied condition "success or failure"
Jul  4 09:59:53.248: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a69c85bc-2b07-4c5c-97dd-78b376144c22 container client-container: 
STEP: delete the pod
Jul  4 09:59:53.344: INFO: Waiting for pod downwardapi-volume-a69c85bc-2b07-4c5c-97dd-78b376144c22 to disappear
Jul  4 09:59:53.369: INFO: Pod downwardapi-volume-a69c85bc-2b07-4c5c-97dd-78b376144c22 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 09:59:53.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5562" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4284,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 09:59:53.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-5370
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5370
STEP: Deleting pre-stop pod
Jul  4 10:00:09.089: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
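The `Saw:` blob above is the tester pod's state report, and the test's success condition is visible in it: `Received.prestop` is 1, i.e. the server observed the deleted pod's preStop hook exactly once. A hedged Python sketch of that check against the JSON as logged (trimmed to the fields the condition uses; the field names come straight from the blob above):

```python
import json

# Tester pod state report, reduced to the checked fields (as logged above).
state = json.loads("""
{
    "Hostname": "server",
    "Sent": null,
    "Received": {"prestop": 1},
    "Errors": null
}
""")

# Success condition: the preStop hook reached the server exactly once,
# and the tester recorded no errors.
assert state["Received"].get("prestop") == 1
assert state["Errors"] is None
print("prestop hook observed:", state["Received"]["prestop"])
```

The `Log` entries about `default/nettest` endpoints are background noise from the tester's peer polling and do not affect the pass/fail decision here.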
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:00:09.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5370" for this suite.

• [SLOW TEST:15.901 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":261,"skipped":4307,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:00:09.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-24d0517a-0fd0-4254-b1d0-f82dde7155b0
STEP: Creating a pod to test consume secrets
Jul  4 10:00:10.619: INFO: Waiting up to 5m0s for pod "pod-secrets-e9a952e4-5e4a-4592-8071-7940bd69614f" in namespace "secrets-5072" to be "success or failure"
Jul  4 10:00:11.084: INFO: Pod "pod-secrets-e9a952e4-5e4a-4592-8071-7940bd69614f": Phase="Pending", Reason="", readiness=false. Elapsed: 464.369433ms
Jul  4 10:00:13.495: INFO: Pod "pod-secrets-e9a952e4-5e4a-4592-8071-7940bd69614f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.875033667s
Jul  4 10:00:15.632: INFO: Pod "pod-secrets-e9a952e4-5e4a-4592-8071-7940bd69614f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.01207028s
Jul  4 10:00:17.635: INFO: Pod "pod-secrets-e9a952e4-5e4a-4592-8071-7940bd69614f": Phase="Running", Reason="", readiness=true. Elapsed: 7.015644093s
Jul  4 10:00:19.640: INFO: Pod "pod-secrets-e9a952e4-5e4a-4592-8071-7940bd69614f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.020985725s
STEP: Saw pod success
Jul  4 10:00:19.641: INFO: Pod "pod-secrets-e9a952e4-5e4a-4592-8071-7940bd69614f" satisfied condition "success or failure"
Jul  4 10:00:19.643: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-e9a952e4-5e4a-4592-8071-7940bd69614f container secret-volume-test: 
STEP: delete the pod
Jul  4 10:00:19.765: INFO: Waiting for pod pod-secrets-e9a952e4-5e4a-4592-8071-7940bd69614f to disappear
Jul  4 10:00:19.769: INFO: Pod pod-secrets-e9a952e4-5e4a-4592-8071-7940bd69614f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:00:19.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5072" for this suite.

• [SLOW TEST:10.499 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4312,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:00:19.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8466.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8466.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8466.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8466.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
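The `podARec=$$(hostname -i | awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8466.pod.cluster.local"}')` pipeline in both probe scripts builds the pod's DNS A-record name by replacing the dots in its IPv4 address with dashes and appending `<namespace>.pod.cluster.local`. A small Python equivalent (the example IP is made up; only the name shape matters):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mirror the awk pipeline: 10.244.1.5 -> 10-244-1-5.<ns>.pod.cluster.local"""
    return pod_ip.replace(".", "-") + "." + namespace + ".pod.cluster.local"

print(pod_a_record("10.244.1.5", "dns-8466"))
# -> 10-244-1-5.dns-8466.pod.cluster.local
```

This is the `PodARecord` name the probe then resolves over UDP and TCP (`wheezy_udp@PodARecord` / `wheezy_tcp@PodARecord` in the result files).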

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  4 10:00:28.230: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:28.287: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:28.368: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:28.371: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:28.425: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:28.428: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:28.430: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:28.431: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:28.448: INFO: Lookups using dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local jessie_udp@dns-test-service-2.dns-8466.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local]

Jul  4 10:00:33.453: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:33.456: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:33.460: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:33.464: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:34.041: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:34.044: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:34.047: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:34.049: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:34.053: INFO: Lookups using dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local jessie_udp@dns-test-service-2.dns-8466.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local]

Jul  4 10:00:38.465: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:38.468: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:38.471: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:38.472: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:38.478: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:38.479: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:38.481: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:38.483: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:38.487: INFO: Lookups using dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local jessie_udp@dns-test-service-2.dns-8466.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local]

Jul  4 10:00:43.515: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:43.518: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:44.095: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:44.099: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:44.109: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:44.111: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:44.118: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:44.121: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:44.126: INFO: Lookups using dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local jessie_udp@dns-test-service-2.dns-8466.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local]

Jul  4 10:00:49.232: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:49.236: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:49.239: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:49.241: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:49.247: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:49.248: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:49.250: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:49.253: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:49.257: INFO: Lookups using dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local jessie_udp@dns-test-service-2.dns-8466.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local]

Jul  4 10:00:54.268: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:54.274: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:56.274: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:56.664: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local from pod dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4: the server could not find the requested resource (get pods dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4)
Jul  4 10:00:57.662: INFO: Lookups using dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8466.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8466.svc.cluster.local]

Jul  4 10:00:58.607: INFO: DNS probes using dns-8466/dns-test-95b95e99-25ac-4649-ae11-f74aa40e8df4 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:00:59.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8466" for this suite.

• [SLOW TEST:39.823 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":263,"skipped":4317,"failed":0}
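The DNS test above works by having probe pods run `dig` lookups and write an `OK` file per successful lookup; the framework then polls and logs "Lookups using ... failed for: [...]" until every expected file is present. A minimal sketch of that bookkeeping (hypothetical helper name; the real logic lives in the e2e DNS test utilities):

```python
def failed_lookups(expected, results):
    """Return the lookup names whose probe result file is missing or not 'OK'.

    expected: iterable of names like 'wheezy_udp@dns-test-service-2...'
    results:  dict mapping lookup name -> contents of the file the probe
              pod wrote under /results (absent key = file not written yet)
    """
    return [name for name in expected if results.get(name) != "OK"]

expected = [
    "wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local",
    "jessie_tcp@dns-test-service-2.dns-8466.svc.cluster.local",
]
# Until the records propagate, some files are absent and the poll reports
# failures; once failed_lookups() is empty the test logs "DNS probes ...
# succeeded", as seen above.
print(failed_lookups(
    expected,
    {"wheezy_udp@dns-test-service-2.dns-8466.svc.cluster.local": "OK"},
))
```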
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:00:59.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 10:01:00.274: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 10:01:02.358: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 10:01:04.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 10:01:06.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 10:01:08.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 10:01:10.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 10:01:13.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 10:01:14.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729453660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 10:01:17.567: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:01:17.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5424" for this suite.
STEP: Destroying namespace "webhook-5424-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.091 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":264,"skipped":4318,"failed":0}
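The webhook test registers its mutating webhook programmatically via the AdmissionRegistration API. For orientation, an illustrative manifest for an equivalent registration is sketched below; the webhook name, path, and caBundle are placeholders (only the service name `e2e-test-webhook` and namespace `webhook-5424` come from the log):

```yaml
# Illustrative only; the e2e framework builds the equivalent object in Go.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-configmap-example
webhooks:
  - name: mutate-configmap.example.com        # placeholder name
    clientConfig:
      service:
        namespace: webhook-5424
        name: e2e-test-webhook
        path: /mutating-configmaps            # placeholder path
      caBundle: "<base64-encoded CA cert>"    # from the server cert set up above
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
```

With such a registration in place, the subsequent "create a configmap that should be updated by the webhook" step succeeds only if the API server reaches the webhook service and applies its patch before persisting the object.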
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:01:17.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 10:01:17.778: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul  4 10:01:17.791: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:17.796: INFO: Number of nodes with available pods: 0
Jul  4 10:01:17.796: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 10:01:18.913: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:18.917: INFO: Number of nodes with available pods: 0
Jul  4 10:01:18.917: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 10:01:19.800: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:19.803: INFO: Number of nodes with available pods: 0
Jul  4 10:01:19.803: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 10:01:20.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:21.014: INFO: Number of nodes with available pods: 0
Jul  4 10:01:21.014: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 10:01:22.488: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:22.534: INFO: Number of nodes with available pods: 0
Jul  4 10:01:22.535: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 10:01:22.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:22.920: INFO: Number of nodes with available pods: 0
Jul  4 10:01:22.920: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 10:01:23.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:23.947: INFO: Number of nodes with available pods: 0
Jul  4 10:01:23.947: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 10:01:25.036: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:25.693: INFO: Number of nodes with available pods: 0
Jul  4 10:01:25.693: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 10:01:25.924: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:26.166: INFO: Number of nodes with available pods: 0
Jul  4 10:01:26.166: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 10:01:26.840: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:26.858: INFO: Number of nodes with available pods: 0
Jul  4 10:01:26.858: INFO: Node jerma-worker is running more than one daemon pod
Jul  4 10:01:27.800: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:27.804: INFO: Number of nodes with available pods: 2
Jul  4 10:01:27.804: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jul  4 10:01:27.857: INFO: Wrong image for pod: daemon-set-flphx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:27.857: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:27.875: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:29.644: INFO: Wrong image for pod: daemon-set-flphx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:29.644: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:29.671: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:29.884: INFO: Wrong image for pod: daemon-set-flphx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:29.885: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:29.889: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:30.969: INFO: Wrong image for pod: daemon-set-flphx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:30.969: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:30.974: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:31.879: INFO: Wrong image for pod: daemon-set-flphx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:31.879: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:31.883: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:32.939: INFO: Wrong image for pod: daemon-set-flphx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:32.939: INFO: Pod daemon-set-flphx is not available
Jul  4 10:01:32.939: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:32.943: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:34.074: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:34.074: INFO: Pod daemon-set-thqmv is not available
Jul  4 10:01:34.394: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:35.549: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:35.549: INFO: Pod daemon-set-thqmv is not available
Jul  4 10:01:35.576: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:36.005: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:36.005: INFO: Pod daemon-set-thqmv is not available
Jul  4 10:01:36.009: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:37.802: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:37.802: INFO: Pod daemon-set-thqmv is not available
Jul  4 10:01:37.805: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:38.107: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:38.107: INFO: Pod daemon-set-thqmv is not available
Jul  4 10:01:38.126: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:39.082: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:39.082: INFO: Pod daemon-set-thqmv is not available
Jul  4 10:01:39.163: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:39.908: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:39.908: INFO: Pod daemon-set-thqmv is not available
Jul  4 10:01:39.959: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:40.879: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:40.882: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:41.879: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:41.883: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:42.879: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:42.879: INFO: Pod daemon-set-mhqcb is not available
Jul  4 10:01:42.881: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:43.879: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:43.879: INFO: Pod daemon-set-mhqcb is not available
Jul  4 10:01:43.882: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:44.879: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:44.879: INFO: Pod daemon-set-mhqcb is not available
Jul  4 10:01:44.883: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:45.878: INFO: Wrong image for pod: daemon-set-mhqcb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  4 10:01:45.879: INFO: Pod daemon-set-mhqcb is not available
Jul  4 10:01:45.882: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:46.891: INFO: Pod daemon-set-tj7jw is not available
Jul  4 10:01:46.923: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul  4 10:01:46.928: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:46.931: INFO: Number of nodes with available pods: 1
Jul  4 10:01:46.931: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  4 10:01:47.935: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:48.202: INFO: Number of nodes with available pods: 1
Jul  4 10:01:48.202: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  4 10:01:48.935: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:48.938: INFO: Number of nodes with available pods: 1
Jul  4 10:01:48.938: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  4 10:01:49.951: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:49.965: INFO: Number of nodes with available pods: 1
Jul  4 10:01:49.965: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  4 10:01:50.935: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  4 10:01:50.939: INFO: Number of nodes with available pods: 2
Jul  4 10:01:50.939: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2613, will wait for the garbage collector to delete the pods
Jul  4 10:01:51.011: INFO: Deleting DaemonSet.extensions daemon-set took: 5.937358ms
Jul  4 10:01:51.311: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.262154ms
Jul  4 10:02:07.400: INFO: Number of nodes with available pods: 0
Jul  4 10:02:07.400: INFO: Number of running nodes: 0, number of available pods: 0
Jul  4 10:02:07.403: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2613/daemonsets","resourceVersion":"35825"},"items":null}

Jul  4 10:02:07.406: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2613/pods","resourceVersion":"35825"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:02:07.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2613" for this suite.

• [SLOW TEST:49.730 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":265,"skipped":4335,"failed":0}
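The rolling-update phase above repeatedly compares each daemon pod's image against the updated spec, logging "Wrong image for pod" until every node runs the new image. A small sketch of that check (hypothetical helper name; image strings taken from the log):

```python
def pods_pending_update(pods, expected_image):
    """Return names of daemon pods still running an image other than expected.

    pods: dict mapping pod name -> currently reported container image.
    Mirrors the "Wrong image for pod" poll messages in the log above.
    """
    return sorted(name for name, image in pods.items() if image != expected_image)

pods = {
    "daemon-set-flphx": "docker.io/library/httpd:2.4.38-alpine",
    "daemon-set-tj7jw": "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
}
# The RollingUpdate strategy deletes an old pod, waits for its replacement to
# become available, then moves to the next -- which is why "Pod ... is not
# available" lines alternate with image checks until this list is empty.
print(pods_pending_update(pods, "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"))
```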
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:02:07.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-b9e4691f-2507-4b56-94b8-1aafd82a1cfb
STEP: Creating a pod to test consume configMaps
Jul  4 10:02:08.463: INFO: Waiting up to 5m0s for pod "pod-configmaps-c25aa234-6b3a-41ff-a5f8-df01fadbf0c0" in namespace "configmap-9354" to be "success or failure"
Jul  4 10:02:08.499: INFO: Pod "pod-configmaps-c25aa234-6b3a-41ff-a5f8-df01fadbf0c0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.008541ms
Jul  4 10:02:10.567: INFO: Pod "pod-configmaps-c25aa234-6b3a-41ff-a5f8-df01fadbf0c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104213967s
Jul  4 10:02:12.611: INFO: Pod "pod-configmaps-c25aa234-6b3a-41ff-a5f8-df01fadbf0c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148371669s
Jul  4 10:02:15.582: INFO: Pod "pod-configmaps-c25aa234-6b3a-41ff-a5f8-df01fadbf0c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.118929553s
STEP: Saw pod success
Jul  4 10:02:15.582: INFO: Pod "pod-configmaps-c25aa234-6b3a-41ff-a5f8-df01fadbf0c0" satisfied condition "success or failure"
Jul  4 10:02:15.585: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-c25aa234-6b3a-41ff-a5f8-df01fadbf0c0 container configmap-volume-test: 
STEP: delete the pod
Jul  4 10:02:17.011: INFO: Waiting for pod pod-configmaps-c25aa234-6b3a-41ff-a5f8-df01fadbf0c0 to disappear
Jul  4 10:02:17.013: INFO: Pod pod-configmaps-c25aa234-6b3a-41ff-a5f8-df01fadbf0c0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:02:17.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9354" for this suite.

• [SLOW TEST:9.716 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4341,"failed":0}
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:02:17.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul  4 10:02:29.681: INFO: 10 pods remaining
Jul  4 10:02:29.681: INFO: 10 pods has nil DeletionTimestamp
Jul  4 10:02:29.681: INFO: 
Jul  4 10:02:30.932: INFO: 10 pods remaining
Jul  4 10:02:30.932: INFO: 0 pods has nil DeletionTimestamp
Jul  4 10:02:30.932: INFO: 
Jul  4 10:02:33.832: INFO: 0 pods remaining
Jul  4 10:02:33.832: INFO: 0 pods has nil DeletionTimestamp
Jul  4 10:02:33.832: INFO: 
Jul  4 10:02:34.209: INFO: 0 pods remaining
Jul  4 10:02:34.209: INFO: 0 pods has nil DeletionTimestamp
Jul  4 10:02:34.209: INFO: 
STEP: Gathering metrics
W0704 10:02:35.827794       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  4 10:02:35.827: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:02:35.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3742" for this suite.

• [SLOW TEST:20.028 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":267,"skipped":4341,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:02:37.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0704 10:03:13.351628       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  4 10:03:13.351: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:03:13.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9299" for this suite.

• [SLOW TEST:36.192 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":268,"skipped":4348,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:03:13.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-b814b495-3ea8-4631-a498-78c92976137e
STEP: Creating configMap with name cm-test-opt-upd-8cc09179-72c4-47bc-b5c0-1f8379c7dc62
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b814b495-3ea8-4631-a498-78c92976137e
STEP: Updating configmap cm-test-opt-upd-8cc09179-72c4-47bc-b5c0-1f8379c7dc62
STEP: Creating configMap with name cm-test-opt-create-a90ebbda-470d-4e79-9ce5-5567e54a5e87
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:03:41.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9372" for this suite.

• [SLOW TEST:28.492 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4355,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:03:41.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  4 10:03:41.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6952'
Jul  4 10:03:54.257: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  4 10:03:54.257: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jul  4 10:03:54.288: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-sw97c]
Jul  4 10:03:54.288: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-sw97c" in namespace "kubectl-6952" to be "running and ready"
Jul  4 10:03:54.302: INFO: Pod "e2e-test-httpd-rc-sw97c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.587007ms
Jul  4 10:03:57.137: INFO: Pod "e2e-test-httpd-rc-sw97c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.849642234s
Jul  4 10:03:59.141: INFO: Pod "e2e-test-httpd-rc-sw97c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.853469816s
Jul  4 10:04:01.255: INFO: Pod "e2e-test-httpd-rc-sw97c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.967459836s
Jul  4 10:04:03.279: INFO: Pod "e2e-test-httpd-rc-sw97c": Phase="Running", Reason="", readiness=true. Elapsed: 8.991483793s
Jul  4 10:04:03.279: INFO: Pod "e2e-test-httpd-rc-sw97c" satisfied condition "running and ready"
Jul  4 10:04:03.279: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-sw97c]
Jul  4 10:04:03.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-6952'
Jul  4 10:04:03.434: INFO: stderr: ""
Jul  4 10:04:03.435: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.212. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.212. Set the 'ServerName' directive globally to suppress this message\n[Sat Jul 04 10:04:01.397824 2020] [mpm_event:notice] [pid 1:tid 140213887777640] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Jul 04 10:04:01.397886 2020] [core:notice] [pid 1:tid 140213887777640] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530
Jul  4 10:04:03.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6952'
Jul  4 10:04:03.594: INFO: stderr: ""
Jul  4 10:04:03.594: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:04:03.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6952" for this suite.

• [SLOW TEST:21.750 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":270,"skipped":4392,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:04:03.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-5b0d8137-d448-4394-ad2d-3f68fb9d6f00 in namespace container-probe-9115
Jul  4 10:04:09.760: INFO: Started pod busybox-5b0d8137-d448-4394-ad2d-3f68fb9d6f00 in namespace container-probe-9115
STEP: checking the pod's current state and verifying that restartCount is present
Jul  4 10:04:09.762: INFO: Initial restart count of pod busybox-5b0d8137-d448-4394-ad2d-3f68fb9d6f00 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:08:10.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9115" for this suite.

• [SLOW TEST:246.883 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4396,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:08:10.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 10:08:10.551: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:08:11.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2223" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":272,"skipped":4446,"failed":0}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:08:11.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-snfn
STEP: Creating a pod to test atomic-volume-subpath
Jul  4 10:08:11.710: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-snfn" in namespace "subpath-5320" to be "success or failure"
Jul  4 10:08:11.727: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Pending", Reason="", readiness=false. Elapsed: 17.280942ms
Jul  4 10:08:13.738: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028731683s
Jul  4 10:08:15.754: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Running", Reason="", readiness=true. Elapsed: 4.044745363s
Jul  4 10:08:17.757: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Running", Reason="", readiness=true. Elapsed: 6.047226052s
Jul  4 10:08:19.760: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Running", Reason="", readiness=true. Elapsed: 8.050166624s
Jul  4 10:08:21.788: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Running", Reason="", readiness=true. Elapsed: 10.078115849s
Jul  4 10:08:23.790: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Running", Reason="", readiness=true. Elapsed: 12.080503352s
Jul  4 10:08:25.815: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Running", Reason="", readiness=true. Elapsed: 14.105590002s
Jul  4 10:08:27.818: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Running", Reason="", readiness=true. Elapsed: 16.108446114s
Jul  4 10:08:29.820: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Running", Reason="", readiness=true. Elapsed: 18.110800423s
Jul  4 10:08:31.827: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Running", Reason="", readiness=true. Elapsed: 20.117418093s
Jul  4 10:08:33.830: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Running", Reason="", readiness=true. Elapsed: 22.120645446s
Jul  4 10:08:35.896: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Running", Reason="", readiness=true. Elapsed: 24.186413355s
Jul  4 10:08:37.899: INFO: Pod "pod-subpath-test-configmap-snfn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.189149636s
STEP: Saw pod success
Jul  4 10:08:37.899: INFO: Pod "pod-subpath-test-configmap-snfn" satisfied condition "success or failure"
Jul  4 10:08:37.901: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-snfn container test-container-subpath-configmap-snfn: 
STEP: delete the pod
Jul  4 10:08:37.952: INFO: Waiting for pod pod-subpath-test-configmap-snfn to disappear
Jul  4 10:08:37.965: INFO: Pod pod-subpath-test-configmap-snfn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-snfn
Jul  4 10:08:37.965: INFO: Deleting pod "pod-subpath-test-configmap-snfn" in namespace "subpath-5320"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:08:37.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5320" for this suite.

• [SLOW TEST:26.418 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":273,"skipped":4450,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:08:37.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jul  4 10:08:38.876: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jul  4 10:08:40.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454118, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454118, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454118, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454118, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  4 10:08:42.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454118, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454118, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454118, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454118, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 10:08:45.919: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 10:08:45.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:08:47.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7116" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:9.144 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":274,"skipped":4494,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:08:47.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  4 10:08:49.062: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  4 10:08:51.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454129, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454129, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454129, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729454128, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  4 10:08:54.208: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  4 10:08:54.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1044-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:08:56.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5593" for this suite.
STEP: Destroying namespace "webhook-5593-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.887 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":275,"skipped":4499,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:08:59.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul  4 10:09:18.580: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  4 10:09:19.888: INFO: Pod pod-with-prestop-http-hook still exists
Jul  4 10:09:21.889: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  4 10:09:21.894: INFO: Pod pod-with-prestop-http-hook still exists
Jul  4 10:09:23.889: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  4 10:09:23.894: INFO: Pod pod-with-prestop-http-hook still exists
Jul  4 10:09:25.889: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  4 10:09:25.892: INFO: Pod pod-with-prestop-http-hook still exists
Jul  4 10:09:27.889: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  4 10:09:27.894: INFO: Pod pod-with-prestop-http-hook still exists
Jul  4 10:09:29.889: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  4 10:09:29.893: INFO: Pod pod-with-prestop-http-hook still exists
Jul  4 10:09:31.889: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  4 10:09:31.892: INFO: Pod pod-with-prestop-http-hook still exists
Jul  4 10:09:33.889: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  4 10:09:33.893: INFO: Pod pod-with-prestop-http-hook still exists
Jul  4 10:09:35.889: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  4 10:09:35.973: INFO: Pod pod-with-prestop-http-hook still exists
Jul  4 10:09:37.889: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  4 10:09:37.893: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:09:37.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3886" for this suite.

• [SLOW TEST:38.912 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4528,"failed":0}
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:09:37.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:09:38.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9976" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":277,"skipped":4529,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  4 10:09:39.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Jul  4 10:09:39.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul  4 10:09:39.405: INFO: stderr: ""
Jul  4 10:09:39.405: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  4 10:09:39.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1797" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":278,"skipped":4554,"failed":0}
SSSSSSSSSSS
Jul  4 10:09:39.412: INFO: Running AfterSuite actions on all nodes
Jul  4 10:09:39.412: INFO: Running AfterSuite actions on node 1
Jul  4 10:09:39.412: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4565,"failed":0}

Ran 278 of 4843 Specs in 7150.206 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4565 Skipped
PASS