I0218 21:10:44.412585 8 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0218 21:10:44.413201 8 e2e.go:109] Starting e2e run "3cf67b14-2d12-4463-85e3-4375b5ca43cc" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582060242 - Will randomize all specs
Will run 278 of 4814 specs

Feb 18 21:10:44.470: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 21:10:44.474: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 18 21:10:44.576: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 18 21:10:44.623: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 18 21:10:44.623: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 18 21:10:44.623: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 18 21:10:44.633: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 18 21:10:44.633: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 18 21:10:44.633: INFO: e2e test version: v1.17.0
Feb 18 21:10:44.635: INFO: kube-apiserver version: v1.17.0
Feb 18 21:10:44.635: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 21:10:44.650: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:10:44.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
Feb 18 21:10:45.317: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2764 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2764;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2764 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2764;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2764.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2764.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2764.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2764.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2764.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2764.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2764.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2764.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2764.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2764.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2764.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 148.21.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.21.148_udp@PTR;check="$$(dig +tcp +noall +answer +search 148.21.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.21.148_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2764 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2764;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2764 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2764;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2764.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2764.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2764.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2764.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2764.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2764.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2764.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2764.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2764.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2764.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2764.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2764.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 148.21.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.21.148_udp@PTR;check="$$(dig +tcp +noall +answer +search 148.21.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.21.148_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 21:10:55.537: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.562: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.567: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.571: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.574: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.579: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.586: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.618: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.625: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.631: INFO: Unable to read jessie_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.637: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.645: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.654: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.658: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.665: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:10:55.694: INFO: Lookups using dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2764 wheezy_tcp@dns-test-service.dns-2764 wheezy_udp@dns-test-service.dns-2764.svc wheezy_tcp@dns-test-service.dns-2764.svc wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2764 jessie_tcp@dns-test-service.dns-2764 jessie_udp@dns-test-service.dns-2764.svc jessie_tcp@dns-test-service.dns-2764.svc jessie_udp@_http._tcp.dns-test-service.dns-2764.svc jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc]
Feb 18 21:11:00.712: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.728: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.746: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.757: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.771: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.789: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.842: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.845: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.851: INFO: Unable to read jessie_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.857: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.862: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.868: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.875: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:00.913: INFO: Lookups using dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2764 wheezy_tcp@dns-test-service.dns-2764 wheezy_udp@dns-test-service.dns-2764.svc wheezy_tcp@dns-test-service.dns-2764.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2764 jessie_tcp@dns-test-service.dns-2764 jessie_udp@dns-test-service.dns-2764.svc jessie_tcp@dns-test-service.dns-2764.svc jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc]
Feb 18 21:11:05.704: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.743: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.752: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.761: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.766: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.770: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.835: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.840: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.843: INFO: Unable to read jessie_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.848: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.863: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.874: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:05.922: INFO: Lookups using dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2764 wheezy_tcp@dns-test-service.dns-2764 wheezy_udp@dns-test-service.dns-2764.svc wheezy_tcp@dns-test-service.dns-2764.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2764 jessie_tcp@dns-test-service.dns-2764 jessie_udp@dns-test-service.dns-2764.svc jessie_tcp@dns-test-service.dns-2764.svc]
Feb 18 21:11:10.718: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.764: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.778: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.786: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.793: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.796: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.832: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.835: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.839: INFO: Unable to read jessie_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.844: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.848: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.852: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:10.904: INFO: Lookups using dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2764 wheezy_tcp@dns-test-service.dns-2764 wheezy_udp@dns-test-service.dns-2764.svc wheezy_tcp@dns-test-service.dns-2764.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2764 jessie_tcp@dns-test-service.dns-2764 jessie_udp@dns-test-service.dns-2764.svc jessie_tcp@dns-test-service.dns-2764.svc]
Feb 18 21:11:16.164: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.173: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.178: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.183: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.188: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.192: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.242: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.247: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.251: INFO: Unable to read jessie_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.281: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.289: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.294: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:16.344: INFO: Lookups using dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2764 wheezy_tcp@dns-test-service.dns-2764 wheezy_udp@dns-test-service.dns-2764.svc wheezy_tcp@dns-test-service.dns-2764.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2764 jessie_tcp@dns-test-service.dns-2764 jessie_udp@dns-test-service.dns-2764.svc jessie_tcp@dns-test-service.dns-2764.svc]
Feb 18 21:11:20.706: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:20.712: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:20.717: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:20.723: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:20.749: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:20.755: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:20.904: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:20.913: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:20.917: INFO: Unable to read jessie_udp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:20.922: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764 from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:20.928: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:20.934: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc from pod dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c: the server could not find the requested resource (get pods dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c)
Feb 18 21:11:21.060: INFO: Lookups using dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2764 wheezy_tcp@dns-test-service.dns-2764 wheezy_udp@dns-test-service.dns-2764.svc wheezy_tcp@dns-test-service.dns-2764.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2764 jessie_tcp@dns-test-service.dns-2764 jessie_udp@dns-test-service.dns-2764.svc jessie_tcp@dns-test-service.dns-2764.svc]
Feb 18 21:11:25.877: INFO: DNS probes using dns-2764/dns-test-ca9cb13b-dfff-4345-80ab-d0dd343e247c succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:11:26.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2764" for this suite.
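Editor's note: the probe commands above pack two name derivations into one line: the pod A record (pod IP with dots replaced by dashes, under `<namespace>.pod.cluster.local`) and the reverse PTR name (octets reversed, under `in-addr.arpa.`). As an illustrative sketch outside the suite, with the namespace `dns-2764` and service IP `10.96.21.148` taken from this run (the pod IP below is hypothetical):

```shell
#!/bin/sh
# Sketch of the name derivation in the probe loop above.
NAMESPACE="dns-2764"

pod_a_record() {
  # e.g. 10.44.0.1 -> 10-44-0-1.dns-2764.pod.cluster.local,
  # mirroring the awk pipeline the probe script runs on `hostname -i`.
  echo "$1" | awk -F. -v ns="$NAMESPACE" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

ptr_name() {
  # Reverse-lookup name: octets reversed under in-addr.arpa.
  echo "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
}

pod_a_record 10.44.0.1     # hypothetical pod IP
ptr_name 10.96.21.148      # matches the 148.21.96.10.in-addr.arpa. queries above
```

This is why the log queries `148.21.96.10.in-addr.arpa.` for the service IP `10.96.21.148`.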
• [SLOW TEST:41.664 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":1,"skipped":29,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:11:26.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 21:11:26.470: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933" in namespace "projected-9805" to be "success or failure"
Feb 18 21:11:26.501: INFO: Pod "downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933": Phase="Pending", Reason="", readiness=false. Elapsed: 30.5264ms
Feb 18 21:11:28.510: INFO: Pod "downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03954136s
Feb 18 21:11:30.527: INFO: Pod "downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056137351s
Feb 18 21:11:32.550: INFO: Pod "downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078996491s
Feb 18 21:11:34.560: INFO: Pod "downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088757227s
Feb 18 21:11:36.573: INFO: Pod "downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933": Phase="Pending", Reason="", readiness=false. Elapsed: 10.102354442s
Feb 18 21:11:39.216: INFO: Pod "downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933": Phase="Pending", Reason="", readiness=false. Elapsed: 12.745446895s
Feb 18 21:11:41.223: INFO: Pod "downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.752501817s
STEP: Saw pod success
Feb 18 21:11:41.223: INFO: Pod "downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933" satisfied condition "success or failure"
Feb 18 21:11:41.230: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933 container client-container:
STEP: delete the pod
Feb 18 21:11:41.456: INFO: Waiting for pod downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933 to disappear
Feb 18 21:11:41.472: INFO: Pod downwardapi-volume-80d3cfcc-ff12-442f-b038-63836fa81933 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:11:41.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9805" for this suite.
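Editor's note: the "Waiting up to 5m0s for pod … to be \"success or failure\"" lines above come from a simple poll-until-condition loop. A minimal sketch of that pattern follows; `get_phase` is a stub standing in for something like `kubectl get pod -o jsonpath='{.status.phase}'`, and the retry counts are illustrative, not the framework's actual code:

```shell
#!/bin/sh
# Poll until the pod reports the wanted phase, or give up after $2 attempts.
wait_for_phase() {
  want="$1"; tries="$2"; i=0
  while [ "$i" -lt "$tries" ]; do
    if [ "$(get_phase)" = "$want" ]; then
      echo "reached $want after $i retries"
      return 0
    fi
    i=$((i + 1))
    # the real framework sleeps ~2s between polls; omitted here
  done
  echo "timed out waiting for $want"
  return 1
}

# Stub phase source: Pending for the first three polls, then Succeeded,
# loosely mirroring the Pending -> Succeeded transitions logged above.
COUNT_FILE="$(mktemp)"; echo 0 > "$COUNT_FILE"
get_phase() {
  n="$(cat "$COUNT_FILE")"; echo $((n + 1)) > "$COUNT_FILE"
  if [ "$n" -lt 3 ]; then echo Pending; else echo Succeeded; fi
}

wait_for_phase Succeeded 10
```

The growing `Elapsed:` values in the log are this loop reporting how long it has been polling.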
• [SLOW TEST:15.172 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":45,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:11:41.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 18 21:11:42.897: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 18 21:11:44.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63717657102, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657102, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657103, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657102, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 21:11:46.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657102, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657102, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657103, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657102, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 21:11:48.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657102, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657102, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657103, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657102, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 21:11:51.987: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:11:52.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6511" for this suite. STEP: Destroying namespace "webhook-6511-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.060 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":3,"skipped":47,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:11:52.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-83d52f65-71aa-4610-9390-c6b3539bf3c6 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:11:52.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7075" for this suite. 
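The Secrets test above expects the API server to reject a Secret whose data map contains an empty key. A sketch of that validation rule (`validateSecretKeys` is a hypothetical helper, not the apiserver's actual code; the character class matches the documented key rules for Secrets and ConfigMaps):

```go
package main

import (
	"fmt"
	"regexp"
)

// Secret data keys must be non-empty and consist of alphanumerics,
// '-', '_' or '.' (same rule as ConfigMap keys).
var keyRe = regexp.MustCompile(`^[-._a-zA-Z0-9]+$`)

// validateSecretKeys is a hypothetical stand-in for the server-side check.
func validateSecretKeys(data map[string][]byte) error {
	for k := range data {
		if k == "" {
			return fmt.Errorf("secret key must not be empty")
		}
		if !keyRe.MatchString(k) {
			return fmt.Errorf("invalid secret key %q", k)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateSecretKeys(map[string][]byte{"": []byte("v")})) // rejected, as the test expects
	fmt.Println(validateSecretKeys(map[string][]byte{"tls.crt": []byte("v")}))
}
```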
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":4,"skipped":51,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:11:52.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Feb 18 21:11:52.747: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Feb 18 21:11:53.231: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Feb 18 21:11:55.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 21:11:57.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 21:11:59.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 21:12:01.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 21:12:03.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 21:12:05.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657113, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} 
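The repeated DeploymentStatus dumps above all show the same shape: `UnavailableReplicas:1` with an `Available=False` / `MinimumReplicasUnavailable` condition until the webhook or apiserver pod comes up. A sketch of the readiness predicate the framework is effectively polling (field names follow apps/v1 DeploymentStatus; the struct here is a trimmed stand-in, not the real type):

```go
package main

import "fmt"

// deploymentStatus mirrors a subset of apps/v1 DeploymentStatus fields
// seen in the log output.
type deploymentStatus struct {
	ObservedGeneration  int64
	Replicas            int32
	UpdatedReplicas     int32
	ReadyReplicas       int32
	AvailableReplicas   int32
	UnavailableReplicas int32
}

// complete reports whether the rollout has finished: every desired
// replica is updated and available, which is the condition the log
// polls for before proceeding.
func complete(want int32, st deploymentStatus) bool {
	return st.UpdatedReplicas == want &&
		st.AvailableReplicas == want &&
		st.UnavailableReplicas == 0
}

func main() {
	progressing := deploymentStatus{Replicas: 1, UpdatedReplicas: 1, UnavailableReplicas: 1}
	done := deploymentStatus{Replicas: 1, UpdatedReplicas: 1, ReadyReplicas: 1, AvailableReplicas: 1}
	fmt.Println(complete(1, progressing), complete(1, done)) // false true
}
```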
Feb 18 21:12:08.166: INFO: Waited 730.496134ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:12:08.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4330" for this suite. • [SLOW TEST:16.337 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":5,"skipped":57,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:12:08.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:12:21.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1648" for this suite. • [SLOW TEST:12.327 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":63,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:12:21.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f8dd7d23-84d7-45ad-ac63-933f266bf6a2 STEP: Creating a 
pod to test consume configMaps Feb 18 21:12:21.449: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7dadd488-d831-4103-9378-6706a7faefb7" in namespace "projected-1579" to be "success or failure" Feb 18 21:12:21.531: INFO: Pod "pod-projected-configmaps-7dadd488-d831-4103-9378-6706a7faefb7": Phase="Pending", Reason="", readiness=false. Elapsed: 81.836825ms Feb 18 21:12:23.539: INFO: Pod "pod-projected-configmaps-7dadd488-d831-4103-9378-6706a7faefb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089427904s Feb 18 21:12:25.656: INFO: Pod "pod-projected-configmaps-7dadd488-d831-4103-9378-6706a7faefb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206050176s Feb 18 21:12:27.663: INFO: Pod "pod-projected-configmaps-7dadd488-d831-4103-9378-6706a7faefb7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213653895s Feb 18 21:12:29.669: INFO: Pod "pod-projected-configmaps-7dadd488-d831-4103-9378-6706a7faefb7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.219828961s Feb 18 21:12:31.677: INFO: Pod "pod-projected-configmaps-7dadd488-d831-4103-9378-6706a7faefb7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.226993341s STEP: Saw pod success Feb 18 21:12:31.677: INFO: Pod "pod-projected-configmaps-7dadd488-d831-4103-9378-6706a7faefb7" satisfied condition "success or failure" Feb 18 21:12:31.682: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-7dadd488-d831-4103-9378-6706a7faefb7 container projected-configmap-volume-test: STEP: delete the pod Feb 18 21:12:32.051: INFO: Waiting for pod pod-projected-configmaps-7dadd488-d831-4103-9378-6706a7faefb7 to disappear Feb 18 21:12:32.060: INFO: Pod pod-projected-configmaps-7dadd488-d831-4103-9378-6706a7faefb7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:12:32.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1579" for this suite. • [SLOW TEST:10.796 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":122,"failed":0} SS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:12:32.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a 
default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-a929f45b-91e7-4a09-9caa-d7b8b5f5aa42 STEP: Creating secret with name s-test-opt-upd-ba1eceec-61d3-40f6-b65b-5ae249fefe2c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a929f45b-91e7-4a09-9caa-d7b8b5f5aa42 STEP: Updating secret s-test-opt-upd-ba1eceec-61d3-40f6-b65b-5ae249fefe2c STEP: Creating secret with name s-test-opt-create-3ee849f8-1134-482e-b1f6-003dbf01ecd9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:12:46.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7852" for this suite. • [SLOW TEST:14.346 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:12:46.435: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 18 21:12:46.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc" in namespace "downward-api-6839" to be "success or failure" Feb 18 21:12:46.603: INFO: Pod "downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc": Phase="Pending", Reason="", readiness=false. Elapsed: 25.130875ms Feb 18 21:12:50.117: INFO: Pod "downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.5393807s Feb 18 21:12:52.139: INFO: Pod "downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.561751067s Feb 18 21:12:54.477: INFO: Pod "downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.89981345s Feb 18 21:12:56.489: INFO: Pod "downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.911132072s Feb 18 21:12:58.500: INFO: Pod "downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.92298574s Feb 18 21:13:00.515: INFO: Pod "downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.937408658s STEP: Saw pod success Feb 18 21:13:00.515: INFO: Pod "downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc" satisfied condition "success or failure" Feb 18 21:13:00.522: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc container client-container: STEP: delete the pod Feb 18 21:13:01.017: INFO: Waiting for pod downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc to disappear Feb 18 21:13:01.049: INFO: Pod downwardapi-volume-09342ff3-d532-49cc-b1d5-d84d92c26edc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:13:01.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6839" for this suite. • [SLOW TEST:14.766 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":160,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:13:01.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account 
to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 18 21:13:01.429: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5016 /api/v1/namespaces/watch-5016/configmaps/e2e-watch-test-label-changed 57b19173-6892-4ac0-af1b-ebe8cbff5663 9259895 0 2020-02-18 21:13:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 18 21:13:01.429: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5016 /api/v1/namespaces/watch-5016/configmaps/e2e-watch-test-label-changed 57b19173-6892-4ac0-af1b-ebe8cbff5663 9259896 0 2020-02-18 21:13:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 18 21:13:01.429: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5016 /api/v1/namespaces/watch-5016/configmaps/e2e-watch-test-label-changed 57b19173-6892-4ac0-af1b-ebe8cbff5663 9259897 0 2020-02-18 21:13:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label 
value was restored Feb 18 21:13:11.471: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5016 /api/v1/namespaces/watch-5016/configmaps/e2e-watch-test-label-changed 57b19173-6892-4ac0-af1b-ebe8cbff5663 9259935 0 2020-02-18 21:13:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 18 21:13:11.472: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5016 /api/v1/namespaces/watch-5016/configmaps/e2e-watch-test-label-changed 57b19173-6892-4ac0-af1b-ebe8cbff5663 9259936 0 2020-02-18 21:13:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 18 21:13:11.472: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5016 /api/v1/namespaces/watch-5016/configmaps/e2e-watch-test-label-changed 57b19173-6892-4ac0-af1b-ebe8cbff5663 9259937 0 2020-02-18 21:13:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:13:11.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5016" for this suite. 
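The watch test above asserts that events for the labelled ConfigMap arrive in a fixed order: ADDED, MODIFIED, DELETED, both before and after the label round-trip. A sketch of that ordering check over a recorded event stream (`expectEvents` is an illustrative helper, not the e2e framework's API):

```go
package main

import "fmt"

// event is a simplified watch event: Type is ADDED, MODIFIED or DELETED.
type event struct {
	Type string
	Name string
}

// expectEvents verifies the observed stream matches the expected event
// types, in order, with no extras or omissions.
func expectEvents(got []event, want []string) error {
	if len(got) != len(want) {
		return fmt.Errorf("got %d events, want %d", len(got), len(want))
	}
	for i, e := range got {
		if e.Type != want[i] {
			return fmt.Errorf("event %d: got %s, want %s", i, e.Type, want[i])
		}
	}
	return nil
}

func main() {
	// The sequence logged for e2e-watch-test-label-changed.
	stream := []event{
		{"ADDED", "e2e-watch-test-label-changed"},
		{"MODIFIED", "e2e-watch-test-label-changed"},
		{"DELETED", "e2e-watch-test-label-changed"},
	}
	fmt.Println(expectEvents(stream, []string{"ADDED", "MODIFIED", "DELETED"})) // <nil>
}
```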
• [SLOW TEST:10.286 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":10,"skipped":178,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:13:11.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 
discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:13:11.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9077" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":11,"skipped":209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:13:11.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Feb 18 21:13:20.396: INFO: Successfully updated pod "annotationupdatece988328-60a1-45db-a436-3582a357fea5" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:13:22.432: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6244" for this suite. • [SLOW TEST:10.832 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:13:22.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 18 21:13:22.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 18 21:13:22.730: INFO: stderr: "" Feb 18 21:13:22.730: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:13:22.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2866" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":13,"skipped":270,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:13:22.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-39fc6067-9d95-4195-b5e6-16b5981dd387 STEP: Creating secret with name secret-projected-all-test-volume-4c8c9902-790e-487a-9eb0-0435b9226f3a STEP: Creating a pod to test Check all projections for projected volume plugin Feb 18 21:13:23.048: INFO: Waiting up to 5m0s for pod "projected-volume-f297125e-ed06-47d8-a1b4-f2ce6ff2d8bc" in 
namespace "projected-7134" to be "success or failure" Feb 18 21:13:23.091: INFO: Pod "projected-volume-f297125e-ed06-47d8-a1b4-f2ce6ff2d8bc": Phase="Pending", Reason="", readiness=false. Elapsed: 42.813911ms Feb 18 21:13:25.097: INFO: Pod "projected-volume-f297125e-ed06-47d8-a1b4-f2ce6ff2d8bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049659115s Feb 18 21:13:27.106: INFO: Pod "projected-volume-f297125e-ed06-47d8-a1b4-f2ce6ff2d8bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057817726s Feb 18 21:13:29.111: INFO: Pod "projected-volume-f297125e-ed06-47d8-a1b4-f2ce6ff2d8bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062844026s Feb 18 21:13:31.118: INFO: Pod "projected-volume-f297125e-ed06-47d8-a1b4-f2ce6ff2d8bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069872116s Feb 18 21:13:33.122: INFO: Pod "projected-volume-f297125e-ed06-47d8-a1b4-f2ce6ff2d8bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074322635s STEP: Saw pod success Feb 18 21:13:33.122: INFO: Pod "projected-volume-f297125e-ed06-47d8-a1b4-f2ce6ff2d8bc" satisfied condition "success or failure" Feb 18 21:13:33.124: INFO: Trying to get logs from node jerma-node pod projected-volume-f297125e-ed06-47d8-a1b4-f2ce6ff2d8bc container projected-all-volume-test: STEP: delete the pod Feb 18 21:13:33.156: INFO: Waiting for pod projected-volume-f297125e-ed06-47d8-a1b4-f2ce6ff2d8bc to disappear Feb 18 21:13:33.171: INFO: Pod projected-volume-f297125e-ed06-47d8-a1b4-f2ce6ff2d8bc no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:13:33.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7134" for this suite. 
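The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above come from the framework polling the pod phase every couple of seconds until it goes terminal. A simplified sketch of that loop (assumed logic, not the real `test/e2e/framework` code; real polling sleeps between attempts):

```python
def wait_for_terminal_phase(get_phase, poll_limit=150):
    """Poll until the pod reaches a terminal phase.

    get_phase() returns the current pod phase string on each poll
    ("Pending", "Running", "Succeeded", "Failed"). Returns the terminal
    phase and the number of polls it took; raises on timeout.
    """
    for attempt in range(poll_limit):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase, attempt
    raise TimeoutError("pod never reached a terminal phase")
```

In the run above the pod stayed Pending for five polls (~10s elapsed) before reporting Succeeded, which satisfied the "success or failure" condition.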
• [SLOW TEST:10.433 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":14,"skipped":284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:13:33.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 18 21:13:33.293: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 18 21:13:36.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5575 create -f -' Feb 18 21:13:38.669: INFO: stderr: "" Feb 18 21:13:38.669: INFO: stdout: "e2e-test-crd-publish-openapi-7609-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr 
created\n" Feb 18 21:13:38.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5575 delete e2e-test-crd-publish-openapi-7609-crds test-cr' Feb 18 21:13:38.831: INFO: stderr: "" Feb 18 21:13:38.831: INFO: stdout: "e2e-test-crd-publish-openapi-7609-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Feb 18 21:13:38.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5575 apply -f -' Feb 18 21:13:39.333: INFO: stderr: "" Feb 18 21:13:39.333: INFO: stdout: "e2e-test-crd-publish-openapi-7609-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 18 21:13:39.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5575 delete e2e-test-crd-publish-openapi-7609-crds test-cr' Feb 18 21:13:39.487: INFO: stderr: "" Feb 18 21:13:39.487: INFO: stdout: "e2e-test-crd-publish-openapi-7609-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 18 21:13:39.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7609-crds' Feb 18 21:13:39.790: INFO: stderr: "" Feb 18 21:13:39.790: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7609-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. 
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:13:43.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5575" for this suite. • [SLOW TEST:10.060 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":15,"skipped":311,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:13:43.243: INFO: >>> kubeConfig: /root/.kube/config 
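Both the CustomResourceDefinition discovery test and the `kubectl explain` check above depend on the API group/version appearing in the server's discovery documents. A minimal sketch of that lookup over a simplified `/apis`-style document (the document shape here is trimmed for illustration; the real `APIGroupList` carries more fields):

```python
def has_group_version(discovery, group, version):
    """Return True if the given API group lists the given version."""
    for g in discovery.get("groups", []):
        if g["name"] == group:
            return any(v["version"] == version for v in g["versions"])
    return False

# Hypothetical trimmed-down /apis discovery document.
apis = {
    "groups": [
        {"name": "apiextensions.k8s.io", "versions": [{"version": "v1"}]},
    ]
}
```

The test walks the same chain the STEP lines describe: find the group in `/apis`, confirm the version, then fetch the group-specific document and confirm the `customresourcedefinitions` resource is listed.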
STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:13:43.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5210" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":16,"skipped":350,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 18 21:13:43.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7249.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-7249.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7249.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7249.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7249.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7249.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7249.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7249.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7249.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7249.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.123.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.123.195_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7249.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7249.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7249.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7249.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7249.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7249.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7249.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7249.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7249.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7249.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7249.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.123.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.123.195_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 18 21:13:55.848: INFO: Unable to read wheezy_udp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:13:55.868: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:13:55.884: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:13:55.901: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:13:55.952: INFO: Unable to read jessie_udp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:13:55.955: INFO: Unable to read jessie_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:13:55.959: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod 
dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:13:55.968: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:13:56.057: INFO: Lookups using dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e failed for: [wheezy_udp@dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_udp@dns-test-service.dns-7249.svc.cluster.local jessie_tcp@dns-test-service.dns-7249.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local] Feb 18 21:14:01.068: INFO: Unable to read wheezy_udp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:01.077: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:01.089: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:01.100: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod 
dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:01.137: INFO: Unable to read jessie_udp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:01.141: INFO: Unable to read jessie_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:01.147: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:01.151: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:01.185: INFO: Lookups using dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e failed for: [wheezy_udp@dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_udp@dns-test-service.dns-7249.svc.cluster.local jessie_tcp@dns-test-service.dns-7249.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local] Feb 18 21:14:06.066: INFO: Unable to read wheezy_udp@dns-test-service.dns-7249.svc.cluster.local from pod 
dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:06.071: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:06.075: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:06.079: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:06.108: INFO: Unable to read jessie_udp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:06.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:06.120: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:06.131: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not 
find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:06.154: INFO: Lookups using dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e failed for: [wheezy_udp@dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_udp@dns-test-service.dns-7249.svc.cluster.local jessie_tcp@dns-test-service.dns-7249.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local] Feb 18 21:14:11.069: INFO: Unable to read wheezy_udp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:11.073: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:11.077: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:11.080: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e) Feb 18 21:14:11.100: INFO: Unable to read jessie_udp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods 
dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:11.102: INFO: Unable to read jessie_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:11.104: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:11.107: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:11.124: INFO: Lookups using dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e failed for: [wheezy_udp@dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_udp@dns-test-service.dns-7249.svc.cluster.local jessie_tcp@dns-test-service.dns-7249.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local]
Feb 18 21:14:16.064: INFO: Unable to read wheezy_udp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:16.069: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:16.074: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:16.079: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:16.105: INFO: Unable to read jessie_udp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:16.108: INFO: Unable to read jessie_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:16.111: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:16.114: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:16.135: INFO: Lookups using dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e failed for: [wheezy_udp@dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_udp@dns-test-service.dns-7249.svc.cluster.local jessie_tcp@dns-test-service.dns-7249.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local]
Feb 18 21:14:21.071: INFO: Unable to read wheezy_udp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:21.076: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:21.082: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:21.088: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:21.148: INFO: Unable to read jessie_udp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:21.155: INFO: Unable to read jessie_tcp@dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:21.160: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:21.167: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local from pod dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e: the server could not find the requested resource (get pods dns-test-45c14e78-2364-45cb-a73c-464acb7e605e)
Feb 18 21:14:21.208: INFO: Lookups using dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e failed for: [wheezy_udp@dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@dns-test-service.dns-7249.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_udp@dns-test-service.dns-7249.svc.cluster.local jessie_tcp@dns-test-service.dns-7249.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7249.svc.cluster.local]
Feb 18 21:14:26.186: INFO: DNS probes using dns-7249/dns-test-45c14e78-2364-45cb-a73c-464acb7e605e succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:14:26.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7249" for this suite.
• [SLOW TEST:43.163 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":17,"skipped":355,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:14:26.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 18 21:14:26.703: INFO: Waiting up to 5m0s for pod "pod-19b38647-f6c4-43e2-9289-ab04073defdc" in namespace "emptydir-8105" to be "success or failure"
Feb 18 21:14:26.709: INFO: Pod "pod-19b38647-f6c4-43e2-9289-ab04073defdc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.109645ms
Feb 18 21:14:28.716: INFO: Pod "pod-19b38647-f6c4-43e2-9289-ab04073defdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012157161s
Feb 18 21:14:30.723: INFO: Pod "pod-19b38647-f6c4-43e2-9289-ab04073defdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019214081s
Feb 18 21:14:32.728: INFO: Pod "pod-19b38647-f6c4-43e2-9289-ab04073defdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024812899s
Feb 18 21:14:34.733: INFO: Pod "pod-19b38647-f6c4-43e2-9289-ab04073defdc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029795826s
Feb 18 21:14:36.741: INFO: Pod "pod-19b38647-f6c4-43e2-9289-ab04073defdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.0375859s
STEP: Saw pod success
Feb 18 21:14:36.741: INFO: Pod "pod-19b38647-f6c4-43e2-9289-ab04073defdc" satisfied condition "success or failure"
Feb 18 21:14:36.746: INFO: Trying to get logs from node jerma-node pod pod-19b38647-f6c4-43e2-9289-ab04073defdc container test-container:
STEP: delete the pod
Feb 18 21:14:36.815: INFO: Waiting for pod pod-19b38647-f6c4-43e2-9289-ab04073defdc to disappear
Feb 18 21:14:36.888: INFO: Pod pod-19b38647-f6c4-43e2-9289-ab04073defdc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:14:36.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8105" for this suite.
• [SLOW TEST:10.383 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":375,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:14:36.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 21:14:38.349: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 21:14:40.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:14:42.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:14:44.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:14:46.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657278, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 21:14:49.422: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:14:50.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1387" for this suite.
STEP: Destroying namespace "webhook-1387-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.383 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":19,"skipped":413,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:14:50.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-7d2879d9-b4ad-4845-b2e4-e78fe65f134c
STEP: Creating a pod to test consume configMaps
Feb 18 21:14:50.417: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4d79c186-a7d3-4979-819c-70e231a0bf36" in namespace "projected-3800" to be "success or failure"
Feb 18 21:14:50.525: INFO: Pod "pod-projected-configmaps-4d79c186-a7d3-4979-819c-70e231a0bf36": Phase="Pending", Reason="", readiness=false. Elapsed: 108.225073ms
Feb 18 21:14:52.533: INFO: Pod "pod-projected-configmaps-4d79c186-a7d3-4979-819c-70e231a0bf36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115882345s
Feb 18 21:14:54.543: INFO: Pod "pod-projected-configmaps-4d79c186-a7d3-4979-819c-70e231a0bf36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12632561s
Feb 18 21:14:56.561: INFO: Pod "pod-projected-configmaps-4d79c186-a7d3-4979-819c-70e231a0bf36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144294785s
Feb 18 21:14:58.571: INFO: Pod "pod-projected-configmaps-4d79c186-a7d3-4979-819c-70e231a0bf36": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154710981s
Feb 18 21:15:00.642: INFO: Pod "pod-projected-configmaps-4d79c186-a7d3-4979-819c-70e231a0bf36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.225277118s
STEP: Saw pod success
Feb 18 21:15:00.642: INFO: Pod "pod-projected-configmaps-4d79c186-a7d3-4979-819c-70e231a0bf36" satisfied condition "success or failure"
Feb 18 21:15:00.646: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-4d79c186-a7d3-4979-819c-70e231a0bf36 container projected-configmap-volume-test:
STEP: delete the pod
Feb 18 21:15:00.712: INFO: Waiting for pod pod-projected-configmaps-4d79c186-a7d3-4979-819c-70e231a0bf36 to disappear
Feb 18 21:15:00.737: INFO: Pod pod-projected-configmaps-4d79c186-a7d3-4979-819c-70e231a0bf36 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:15:00.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3800" for this suite.
• [SLOW TEST:10.509 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":441,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:15:00.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:15:05.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3109" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":21,"skipped":456,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:15:05.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:15:05.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2937" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":22,"skipped":457,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:15:05.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:15:14.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4113" for this suite.
• [SLOW TEST:8.194 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":487,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:15:14.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-12a119e0-d8d0-4864-88c8-aa9b45f62c5f
STEP: Creating a pod to test consume configMaps
Feb 18 21:15:14.261: INFO: Waiting up to 5m0s for pod "pod-configmaps-24b09ded-a1b2-4a2e-ba6a-e678cd5aa512" in namespace "configmap-5565" to be "success or failure"
Feb 18 21:15:14.282: INFO: Pod "pod-configmaps-24b09ded-a1b2-4a2e-ba6a-e678cd5aa512": Phase="Pending", Reason="", readiness=false. Elapsed: 20.581908ms
Feb 18 21:15:16.290: INFO: Pod "pod-configmaps-24b09ded-a1b2-4a2e-ba6a-e678cd5aa512": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028615334s
Feb 18 21:15:18.297: INFO: Pod "pod-configmaps-24b09ded-a1b2-4a2e-ba6a-e678cd5aa512": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035753179s
Feb 18 21:15:20.418: INFO: Pod "pod-configmaps-24b09ded-a1b2-4a2e-ba6a-e678cd5aa512": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156820702s
Feb 18 21:15:22.431: INFO: Pod "pod-configmaps-24b09ded-a1b2-4a2e-ba6a-e678cd5aa512": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169667997s
Feb 18 21:15:24.437: INFO: Pod "pod-configmaps-24b09ded-a1b2-4a2e-ba6a-e678cd5aa512": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.175600286s
STEP: Saw pod success
Feb 18 21:15:24.437: INFO: Pod "pod-configmaps-24b09ded-a1b2-4a2e-ba6a-e678cd5aa512" satisfied condition "success or failure"
Feb 18 21:15:24.441: INFO: Trying to get logs from node jerma-node pod pod-configmaps-24b09ded-a1b2-4a2e-ba6a-e678cd5aa512 container configmap-volume-test:
STEP: delete the pod
Feb 18 21:15:24.532: INFO: Waiting for pod pod-configmaps-24b09ded-a1b2-4a2e-ba6a-e678cd5aa512 to disappear
Feb 18 21:15:24.550: INFO: Pod pod-configmaps-24b09ded-a1b2-4a2e-ba6a-e678cd5aa512 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:15:24.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5565" for this suite.
• [SLOW TEST:10.428 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":489,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:15:24.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:15:35.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3425" for this suite.
• [SLOW TEST:11.373 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":25,"skipped":493,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:15:35.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Feb 18 21:15:36.030: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 21:15:39.624: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:15:51.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8554" for this suite.
• [SLOW TEST:15.877 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":26,"skipped":537,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:15:51.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Feb 18 21:15:52.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2879'
Feb 18 21:15:52.677: INFO: stderr: ""
Feb 18 21:15:52.677: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 21:15:52.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2879'
Feb 18 21:15:52.894: INFO: stderr: ""
Feb 18 21:15:52.894: INFO: stdout: "update-demo-nautilus-8wxxz update-demo-nautilus-9qbbj "
Feb 18 21:15:52.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wxxz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2879'
Feb 18 21:15:53.023: INFO: stderr: ""
Feb 18 21:15:53.023: INFO: stdout: ""
Feb 18 21:15:53.023: INFO: update-demo-nautilus-8wxxz is created but not running
Feb 18 21:15:58.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2879'
Feb 18 21:15:59.186: INFO: stderr: ""
Feb 18 21:15:59.186: INFO: stdout: "update-demo-nautilus-8wxxz update-demo-nautilus-9qbbj "
Feb 18 21:15:59.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wxxz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2879'
Feb 18 21:16:01.376: INFO: stderr: ""
Feb 18 21:16:01.376: INFO: stdout: ""
Feb 18 21:16:01.376: INFO: update-demo-nautilus-8wxxz is created but not running
Feb 18 21:16:06.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2879'
Feb 18 21:16:06.663: INFO: stderr: ""
Feb 18 21:16:06.663: INFO: stdout: "update-demo-nautilus-8wxxz update-demo-nautilus-9qbbj "
Feb 18 21:16:06.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wxxz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2879'
Feb 18 21:16:06.772: INFO: stderr: ""
Feb 18 21:16:06.772: INFO: stdout: "true"
Feb 18 21:16:06.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wxxz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2879'
Feb 18 21:16:06.901: INFO: stderr: ""
Feb 18 21:16:06.902: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 21:16:06.902: INFO: validating pod update-demo-nautilus-8wxxz
Feb 18 21:16:06.909: INFO: got data: { "image": "nautilus.jpg" }
Feb 18 21:16:06.910: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 18 21:16:06.910: INFO: update-demo-nautilus-8wxxz is verified up and running
Feb 18 21:16:06.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9qbbj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2879'
Feb 18 21:16:07.061: INFO: stderr: ""
Feb 18 21:16:07.061: INFO: stdout: "true"
Feb 18 21:16:07.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9qbbj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2879'
Feb 18 21:16:07.195: INFO: stderr: ""
Feb 18 21:16:07.195: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 21:16:07.195: INFO: validating pod update-demo-nautilus-9qbbj
Feb 18 21:16:07.204: INFO: got data: { "image": "nautilus.jpg" }
Feb 18 21:16:07.204: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 18 21:16:07.204: INFO: update-demo-nautilus-9qbbj is verified up and running
STEP: using delete to clean up resources
Feb 18 21:16:07.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2879'
Feb 18 21:16:07.316: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Feb 18 21:16:07.316: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 18 21:16:07.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2879' Feb 18 21:16:07.407: INFO: stderr: "No resources found in kubectl-2879 namespace.\n" Feb 18 21:16:07.407: INFO: stdout: "" Feb 18 21:16:07.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2879 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 18 21:16:07.555: INFO: stderr: "" Feb 18 21:16:07.555: INFO: stdout: "update-demo-nautilus-8wxxz\nupdate-demo-nautilus-9qbbj\n" Feb 18 21:16:08.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2879' Feb 18 21:16:08.666: INFO: stderr: "No resources found in kubectl-2879 namespace.\n" Feb 18 21:16:08.666: INFO: stdout: "" Feb 18 21:16:08.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2879 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 18 21:16:08.879: INFO: stderr: "" Feb 18 21:16:08.879: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 18 21:16:08.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2879" for this suite. 
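The go-template repeated throughout this test prints "true" only once the update-demo container has reached the running state; an empty stdout means "created but not running" and the test polls again. A minimal Python sketch of the same check, run against a hypothetical, trimmed-down version of `kubectl get pod -o json` output:

```python
def container_running(pod: dict, name: str = "update-demo") -> str:
    """Mirror of the go-template: emit "true" per matching running container."""
    out = ""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        # the template requires both the container name to match and a
        # "running" key to exist under .state
        if cs.get("name") == name and "running" in cs.get("state", {}):
            out += "true"
    return out

pod_pending = {"status": {}}  # containerStatuses not yet populated
pod_running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2020-02-18T21:16:05Z"}}}]}}

print(container_running(pod_pending))  # "" -> keep polling
print(container_running(pod_running))  # "true"
```

The poll-until-"true" loop in the log above is exactly this predicate evaluated server-side by `kubectl get -o template` every five seconds.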
• [SLOW TEST:17.090 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":27,"skipped":544,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:16:08.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb 18 21:16:09.052: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 18 21:16:09.067: INFO: Waiting for terminating namespaces to be deleted...
Feb 18 21:16:09.072: INFO: Logging pods the kubelet thinks is on node jerma-node before test
Feb 18 21:16:09.082: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 18 21:16:09.082: INFO: Container weave ready: true, restart count 1
Feb 18 21:16:09.082: INFO: Container weave-npc ready: true, restart count 0
Feb 18 21:16:09.082: INFO: update-demo-nautilus-8wxxz from kubectl-2879 started at 2020-02-18 21:15:52 +0000 UTC (1 container statuses recorded)
Feb 18 21:16:09.082: INFO: Container update-demo ready: true, restart count 0
Feb 18 21:16:09.082: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 18 21:16:09.082: INFO: Container kube-proxy ready: true, restart count 0
Feb 18 21:16:09.082: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Feb 18 21:16:09.102: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 18 21:16:09.102: INFO: Container kube-controller-manager ready: true, restart count 13
Feb 18 21:16:09.102: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 18 21:16:09.102: INFO: Container kube-proxy ready: true, restart count 0
Feb 18 21:16:09.102: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 18 21:16:09.102: INFO: Container weave ready: true, restart count 0
Feb 18 21:16:09.102: INFO: Container weave-npc ready: true, restart count 0
Feb 18 21:16:09.102: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 18 21:16:09.102: INFO: Container kube-scheduler ready: true, restart count 17
Feb 18 21:16:09.102: INFO: update-demo-nautilus-9qbbj from kubectl-2879 started at 2020-02-18 21:15:52 +0000 UTC (1 container statuses recorded)
Feb 18 21:16:09.102: INFO: Container update-demo ready: true, restart count 0
Feb 18 21:16:09.102: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 18 21:16:09.102: INFO: Container kube-apiserver ready: true, restart count 1
Feb 18 21:16:09.102: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 18 21:16:09.102: INFO: Container etcd ready: true, restart count 1
Feb 18 21:16:09.102: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 18 21:16:09.102: INFO: Container coredns ready: true, restart count 0
Feb 18 21:16:09.102: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 18 21:16:09.102: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Feb 18 21:16:09.590: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 18 21:16:09.590: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 18 21:16:09.590: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 18 21:16:09.590: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Feb 18 21:16:09.590: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Feb 18 21:16:09.590: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 18 21:16:09.590: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Feb 18 21:16:09.590: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 18 21:16:09.590: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Feb 18 21:16:09.590: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
Feb 18 21:16:09.590: INFO: Pod update-demo-nautilus-8wxxz requesting resource cpu=0m on Node jerma-node
Feb 18 21:16:09.590: INFO: Pod update-demo-nautilus-9qbbj requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
STEP: Starting Pods to consume most of the cluster CPU.
Feb 18 21:16:09.590: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Feb 18 21:16:09.627: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
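The filler-pod sizing above is plain millicpu arithmetic: the test sums the CPU requests already logged for each node and creates a filler pod requesting the remainder, so any further request cannot fit and fails with "Insufficient cpu". A sketch under an assumed node allocatable of 4000m (the real value comes from `node.status.allocatable`, which the log does not show):

```python
def parse_millicpu(q: str) -> int:
    # Kubernetes CPU quantities: "2786m" -> 2786 millicores, "2" -> 2000
    return int(q[:-1]) if q.endswith("m") else int(q) * 1000

# CPU requests logged for jerma-node before the test
requests = {"kube-proxy-dsf66": "0m",
            "weave-net-kz8lv": "20m",
            "update-demo-nautilus-8wxxz": "0m"}
allocatable = parse_millicpu("4000m")  # assumption, not taken from the log
used = sum(parse_millicpu(q) for q in requests.values())

filler = allocatable - used            # filler pod requests everything left
extra = parse_millicpu("500m")         # hypothetical additional pod
fits = used + filler + extra <= allocatable
print(filler, fits)                    # 3980 False -> FailedScheduling
```

With the logged requests of 20m on jerma-node, this model would size the filler at allocatable minus 20m, matching the pattern of the 2786m filler the test actually created.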
STEP: Considering event: Type = [Normal], Name = [filler-pod-24eeb71c-f9c7-4419-9c92-6dbbe6c11170.15f49bb4c9032938], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6000/filler-pod-24eeb71c-f9c7-4419-9c92-6dbbe6c11170 to jerma-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-24eeb71c-f9c7-4419-9c92-6dbbe6c11170.15f49bb605fd1727], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-24eeb71c-f9c7-4419-9c92-6dbbe6c11170.15f49bb6e8effead], Reason = [Created], Message = [Created container filler-pod-24eeb71c-f9c7-4419-9c92-6dbbe6c11170]
STEP: Considering event: Type = [Normal], Name = [filler-pod-24eeb71c-f9c7-4419-9c92-6dbbe6c11170.15f49bb7113325d1], Reason = [Started], Message = [Started container filler-pod-24eeb71c-f9c7-4419-9c92-6dbbe6c11170]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9f27f436-6f69-48f1-996b-a9d9b855ae2d.15f49bb4ccfe3087], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6000/filler-pod-9f27f436-6f69-48f1-996b-a9d9b855ae2d to jerma-server-mvvl6gufaqub]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9f27f436-6f69-48f1-996b-a9d9b855ae2d.15f49bb6101f7511], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9f27f436-6f69-48f1-996b-a9d9b855ae2d.15f49bb6da8e78e4], Reason = [Created], Message = [Created container filler-pod-9f27f436-6f69-48f1-996b-a9d9b855ae2d]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9f27f436-6f69-48f1-996b-a9d9b855ae2d.15f49bb6fb75d245], Reason = [Started], Message = [Started container filler-pod-9f27f436-6f69-48f1-996b-a9d9b855ae2d]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15f49bb799e0574c], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15f49bb79db15cbf], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:16:23.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6000" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:14.209 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":28,"skipped":549,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:16:23.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:16:23.450: INFO: Create a RollingUpdate DaemonSet
Feb 18 21:16:23.456: INFO: Check that daemon pods launch on every node of the cluster
Feb 18 21:16:23.490: INFO: Number of nodes with available pods: 0
Feb 18 21:16:23.490: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:24.515: INFO: Number of nodes with available pods: 0
Feb 18 21:16:24.515: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:25.881: INFO: Number of nodes with available pods: 0
Feb 18 21:16:25.881: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:27.408: INFO: Number of nodes with available pods: 0
Feb 18 21:16:27.408: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:27.527: INFO: Number of nodes with available pods: 0
Feb 18 21:16:27.527: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:28.574: INFO: Number of nodes with available pods: 0
Feb 18 21:16:28.574: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:29.670: INFO: Number of nodes with available pods: 0
Feb 18 21:16:29.671: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:32.324: INFO: Number of nodes with available pods: 0
Feb 18 21:16:32.324: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:32.624: INFO: Number of nodes with available pods: 0
Feb 18 21:16:32.624: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:34.597: INFO: Number of nodes with available pods: 0
Feb 18 21:16:34.597: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:35.831: INFO: Number of nodes with available pods: 0
Feb 18 21:16:35.831: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:36.509: INFO: Number of nodes with available pods: 0
Feb 18 21:16:36.509: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:16:37.504: INFO: Number of nodes with available pods: 2
Feb 18 21:16:37.504: INFO: Number of running nodes: 2, number of available pods: 2
Feb 18 21:16:37.504: INFO: Update the DaemonSet to trigger a rollout
Feb 18 21:16:37.547: INFO: Updating DaemonSet daemon-set
Feb 18 21:16:45.581: INFO: Roll back the DaemonSet before rollout is complete
Feb 18 21:16:45.592: INFO: Updating DaemonSet daemon-set
Feb 18 21:16:45.592: INFO: Make sure DaemonSet rollback is complete
Feb 18 21:16:45.778: INFO: Wrong image for pod: daemon-set-l2j95. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 18 21:16:45.778: INFO: Pod daemon-set-l2j95 is not available
Feb 18 21:16:46.829: INFO: Wrong image for pod: daemon-set-l2j95. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 18 21:16:46.829: INFO: Pod daemon-set-l2j95 is not available
Feb 18 21:16:47.831: INFO: Wrong image for pod: daemon-set-l2j95. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 18 21:16:47.831: INFO: Pod daemon-set-l2j95 is not available
Feb 18 21:16:48.829: INFO: Wrong image for pod: daemon-set-l2j95. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 18 21:16:48.829: INFO: Pod daemon-set-l2j95 is not available
Feb 18 21:16:49.827: INFO: Pod daemon-set-wt65g is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5843, will wait for the garbage collector to delete the pods
Feb 18 21:16:49.949: INFO: Deleting DaemonSet.extensions daemon-set took: 34.513637ms
Feb 18 21:16:50.551: INFO: Terminating DaemonSet.extensions daemon-set pods took: 601.640736ms
Feb 18 21:17:03.157: INFO: Number of nodes with available pods: 0
Feb 18 21:17:03.157: INFO: Number of running nodes: 0, number of available pods: 0
Feb 18 21:17:03.164: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5843/daemonsets","resourceVersion":"9261150"},"items":null}
Feb 18 21:17:03.167: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5843/pods","resourceVersion":"9261150"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:17:03.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5843" for this suite.
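The "Wrong image for pod" lines above come from the rollback-completeness check: after rolling back, every daemon pod must once again run the original image before the test passes. A sketch of that check (the pod-to-image maps below are hypothetical, modeled on the names and images in the log):

```python
EXPECTED_IMAGE = "docker.io/library/httpd:2.4.38-alpine"

def wrong_image_pods(pods: dict) -> list:
    """Names of daemon pods still running something other than the
    rollback target image; non-empty means keep polling."""
    return [name for name, image in sorted(pods.items())
            if image != EXPECTED_IMAGE]

# mid-rollback: the pod created from the bad revision still exists
mid_rollback = {"daemon-set-l2j95": "foo:non-existent",
                "daemon-set-wt65g": EXPECTED_IMAGE}
print(wrong_image_pods(mid_rollback))  # ['daemon-set-l2j95']

# rollback complete: the bad pod was replaced
rolled_back = {"daemon-set-wt65g": EXPECTED_IMAGE}
print(wrong_image_pods(rolled_back))   # []
```

The "without unnecessary restarts" part of the spec is the stronger guarantee: pods already running the old image (like daemon-set-wt65g here) must not be restarted by the rollback.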
• [SLOW TEST:40.122 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":29,"skipped":550,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:17:03.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 18 21:17:03.401: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:17:16.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1966" for this suite.
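The Pods test above drives a watch on the pod and requires that a creation event is observed before the deletion event that follows the graceful delete. The ordering check can be sketched over a hypothetical event stream of the kind a watch would deliver:

```python
def lifecycle_observed(events: list) -> bool:
    """True when an ADDED event for the pod precedes a DELETED event,
    i.e. both submission and removal were observed in order."""
    types = [etype for etype, _ in events]
    return ("ADDED" in types and "DELETED" in types
            and types.index("ADDED") < types.index("DELETED"))

# hypothetical stream: create, status updates during termination, delete
stream = [("ADDED", "pod-submit-remove"),
          ("MODIFIED", "pod-submit-remove"),
          ("MODIFIED", "pod-submit-remove"),
          ("DELETED", "pod-submit-remove")]
print(lifecycle_observed(stream))  # True
```

The repeated MODIFIED events model the "kubelet observed the termination notice" phase: the pod object is updated (deletionTimestamp set, containers stopping) before the final DELETED event arrives.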
• [SLOW TEST:13.590 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":553,"failed":0}
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:17:16.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0218 21:17:47.547564       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 21:17:47.547: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:17:47.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1569" for this suite.
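With `deleteOptions.propagationPolicy: Orphan`, the garbage collector deletes only the owner and strips the now-dangling ownerReferences from its dependents instead of deleting them, which is why the ReplicaSet must survive the 30-second window the test waits out above. A toy model of that behaviour (the objects are hypothetical stand-ins, not real API calls):

```python
def delete_with_orphan(owner_uid: str, objects: list) -> list:
    """Delete the owner; keep dependents, clearing refs to the deleted owner."""
    remaining = []
    for obj in objects:
        if obj["uid"] == owner_uid:
            continue  # only the owner itself is removed
        # orphaning: drop the ownerReference instead of cascading the delete
        obj["ownerReferences"] = [r for r in obj.get("ownerReferences", [])
                                  if r["uid"] != owner_uid]
        remaining.append(obj)
    return remaining

deployment = {"kind": "Deployment", "uid": "d-1"}
replicaset = {"kind": "ReplicaSet", "uid": "rs-1",
              "ownerReferences": [{"kind": "Deployment", "uid": "d-1"}]}

left = delete_with_orphan("d-1", [deployment, replicaset])
print([o["kind"] for o in left])  # ['ReplicaSet'] -> orphaned, not deleted
```

Under the default Background policy the ReplicaSet would instead be collected once the Deployment is gone; the test asserts the Orphan policy prevents exactly that.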
• [SLOW TEST:30.723 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":31,"skipped":553,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:17:47.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:17:47.894: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f182a5eb-3d60-4083-a652-3cbb950c44da", Controller:(*bool)(0xc003ef378a), BlockOwnerDeletion:(*bool)(0xc003ef378b)}}
Feb 18 21:17:48.002: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"73598aa4-3056-4263-a46d-af08f55ffe71", Controller:(*bool)(0xc003ef39ca), BlockOwnerDeletion:(*bool)(0xc003ef39cb)}}
Feb 18 21:17:48.036: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"eecae1e7-1430-4029-91a4-53bb208bd594", Controller:(*bool)(0xc003ef3c7a), BlockOwnerDeletion:(*bool)(0xc003ef3c7b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:17:53.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4485" for this suite.
• [SLOW TEST:5.532 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":32,"skipped":554,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:17:53.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:17:53.323: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 24.049517ms)
Feb 18 21:17:53.354: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 29.908933ms)
Feb 18 21:17:53.412: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 58.270588ms)
Feb 18 21:17:53.452: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 39.294252ms)
Feb 18 21:17:53.529: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 77.298308ms)
Feb 18 21:17:53.542: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.642386ms)
Feb 18 21:17:53.546: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.753963ms)
Feb 18 21:17:53.550: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.412096ms)
Feb 18 21:17:53.553: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.429222ms)
Feb 18 21:17:53.558: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.364088ms)
Feb 18 21:17:53.562: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.062834ms)
Feb 18 21:17:53.569: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.62415ms)
Feb 18 21:17:53.574: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.441832ms)
Feb 18 21:17:53.594: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.869187ms)
Feb 18 21:17:53.605: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.615682ms)
Feb 18 21:17:53.611: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.249001ms)
Feb 18 21:17:53.616: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.451404ms)
Feb 18 21:17:53.634: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.424474ms)
Feb 18 21:17:53.656: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.601779ms)
Feb 18 21:17:53.665: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.580627ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:17:53.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5062" for this suite.
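Each of the 20 proxied requests above is logged with its HTTP status and latency in the form "(200; <time>ms)". A small parser for summarizing such lines (the sample strings below are copied from the output above):

```python
import re

# matches e.g. "(200; 24.049517ms)" -> status code and latency in ms
TIMING = re.compile(r"\((\d{3}); ([\d.]+)ms\)")

def summarize(lines):
    """Return (request count, mean latency in ms) for proxy-request log lines."""
    hits = [(int(code), float(ms))
            for line in lines for code, ms in TIMING.findall(line)]
    assert all(code == 200 for code, _ in hits)  # every request must succeed
    return len(hits), sum(ms for _, ms in hits) / len(hits)

sample = ["alternatives.l... (200; 24.049517ms)",
          "alternatives.l... (200; 29.908933ms)"]
print(summarize(sample))  # (2, 26.979225)
```

Run over the full 20 iterations, a summary like this makes the latency spread visible at a glance (here roughly 3ms to 77ms, all with status 200).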
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":33,"skipped":564,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:17:55.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 18 21:18:14.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 21:18:14.099: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 21:18:16.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 21:18:16.106: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 21:18:18.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 21:18:18.105: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 21:18:20.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 21:18:20.104: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 21:18:22.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 21:18:22.104: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 21:18:24.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 21:18:24.109: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:18:24.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5663" for this suite.

• [SLOW TEST:28.547 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":586,"failed":0}
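As an illustrative sketch (not the suite's exact manifest), the shape of a pod with a `postStart` exec hook, which the case above creates, checks, and deletes. The pod name matches this log; the image and hook command are placeholders.

```python
# Illustrative sketch of a pod with a postStart exec lifecycle hook.
# The hook command below is a placeholder, not the suite's real command.
pod_with_poststart_exec_hook = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-exec-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-poststart-exec-hook",
            "image": "busybox",
            "lifecycle": {
                # postStart runs inside the container immediately after it
                # starts; the separate HTTPGet-handler pod created in
                # BeforeEach above is what the real hook reports to.
                "postStart": {
                    "exec": {"command": ["sh", "-c", "echo hook ran"]},
                },
            },
        }],
    },
}
```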
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:18:24.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 21:18:24.291: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa" in namespace "projected-3192" to be "success or failure"
Feb 18 21:18:24.297: INFO: Pod "downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 5.638703ms
Feb 18 21:18:26.307: INFO: Pod "downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01589033s
Feb 18 21:18:28.322: INFO: Pod "downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030818443s
Feb 18 21:18:30.327: INFO: Pod "downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036445313s
Feb 18 21:18:32.341: INFO: Pod "downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050425122s
Feb 18 21:18:34.349: INFO: Pod "downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057740352s
Feb 18 21:18:36.362: INFO: Pod "downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.070757029s
STEP: Saw pod success
Feb 18 21:18:36.362: INFO: Pod "downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa" satisfied condition "success or failure"
Feb 18 21:18:36.369: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa container client-container: 
STEP: delete the pod
Feb 18 21:18:36.445: INFO: Waiting for pod downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa to disappear
Feb 18 21:18:36.464: INFO: Pod downwardapi-volume-80d5a6ad-735c-45a9-a20b-631a9b7ee0fa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:18:36.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3192" for this suite.

• [SLOW TEST:12.403 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":588,"failed":0}
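As an illustrative sketch, the mechanism the case above verifies: a `projected` volume with a `downwardAPI` source exposing the container's memory limit as a file. Names, image, and the limit value are placeholders, not the suite's exact spec.

```python
# Illustrative sketch: projected downwardAPI volume surfacing
# resources.limits.memory as a file the container can cat.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/podinfo/memory_limit"],
            "resources": {"limits": {"memory": "64Mi"}},
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {"sources": [{
                "downwardAPI": {"items": [{
                    "path": "memory_limit",
                    # resourceFieldRef is what maps the container's memory
                    # limit into the mounted file.
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.memory",
                    },
                }]},
            }]},
        }],
    },
}
```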
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:18:36.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-1380/configmap-test-e5ec0841-2f2d-4782-b3a8-32b918a29981
STEP: Creating a pod to test consume configMaps
Feb 18 21:18:36.813: INFO: Waiting up to 5m0s for pod "pod-configmaps-e70dc996-5112-463a-a97e-178351f80ddb" in namespace "configmap-1380" to be "success or failure"
Feb 18 21:18:36.958: INFO: Pod "pod-configmaps-e70dc996-5112-463a-a97e-178351f80ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 144.585436ms
Feb 18 21:18:38.967: INFO: Pod "pod-configmaps-e70dc996-5112-463a-a97e-178351f80ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153957489s
Feb 18 21:18:40.978: INFO: Pod "pod-configmaps-e70dc996-5112-463a-a97e-178351f80ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164033706s
Feb 18 21:18:42.988: INFO: Pod "pod-configmaps-e70dc996-5112-463a-a97e-178351f80ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174030499s
Feb 18 21:18:44.993: INFO: Pod "pod-configmaps-e70dc996-5112-463a-a97e-178351f80ddb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.179216452s
STEP: Saw pod success
Feb 18 21:18:44.993: INFO: Pod "pod-configmaps-e70dc996-5112-463a-a97e-178351f80ddb" satisfied condition "success or failure"
Feb 18 21:18:44.995: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e70dc996-5112-463a-a97e-178351f80ddb container env-test: 
STEP: delete the pod
Feb 18 21:18:45.033: INFO: Waiting for pod pod-configmaps-e70dc996-5112-463a-a97e-178351f80ddb to disappear
Feb 18 21:18:45.039: INFO: Pod pod-configmaps-e70dc996-5112-463a-a97e-178351f80ddb no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:18:45.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1380" for this suite.

• [SLOW TEST:8.528 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":626,"failed":0}
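As an illustrative sketch, consuming a ConfigMap key as an environment variable, which is what the `env-test` container above does. The ConfigMap name, key, and value are placeholders.

```python
# Illustrative sketch: a ConfigMap plus the env entry that injects one of
# its keys into a container, as the conformance case above verifies.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test"},
    "data": {"data-1": "value-1"},
}

# env entry for the consuming container: CONFIG_DATA_1 gets the value of
# key "data-1" from the ConfigMap at pod start.
container_env = [{
    "name": "CONFIG_DATA_1",
    "valueFrom": {
        "configMapKeyRef": {"name": "configmap-test", "key": "data-1"},
    },
}]
```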
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:18:45.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 21:18:46.090: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 21:18:48.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:18:50.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:18:52.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:18:54.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717657526, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 21:18:57.160: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:18:57.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-817" for this suite.
STEP: Destroying namespace "webhook-817-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.442 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":37,"skipped":633,"failed":0}
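As an illustrative sketch, the shape of the MutatingWebhookConfiguration registered in the "Registering the mutating configmap webhook" step above. The service name and namespace match this run's log; the webhook name, path, and CA bundle are placeholders.

```python
# Illustrative sketch: a v1 MutatingWebhookConfiguration that intercepts
# ConfigMap creation and routes it to the deployed webhook service.
webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "MutatingWebhookConfiguration",
    "metadata": {"name": "mutate-configmap-example"},
    "webhooks": [{
        "name": "mutate-configmap.example.com",
        "clientConfig": {
            # Matches the service paired with an endpoint in the log above;
            # caBundle (required against the self-signed cert) is elided.
            "service": {
                "name": "e2e-test-webhook",
                "namespace": "webhook-817",
                "path": "/mutating-configmaps",
            },
        },
        # Fire only on ConfigMap CREATE in the core ("") API group.
        "rules": [{
            "operations": ["CREATE"],
            "apiGroups": [""],
            "apiVersions": ["v1"],
            "resources": ["configmaps"],
        }],
        "admissionReviewVersions": ["v1"],
        "sideEffects": "None",
    }],
}
```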
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:18:57.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Feb 18 21:18:57.576: INFO: Waiting up to 5m0s for pod "var-expansion-548ac164-5468-4799-8c60-23bbe8314778" in namespace "var-expansion-5745" to be "success or failure"
Feb 18 21:18:57.652: INFO: Pod "var-expansion-548ac164-5468-4799-8c60-23bbe8314778": Phase="Pending", Reason="", readiness=false. Elapsed: 76.007111ms
Feb 18 21:18:59.658: INFO: Pod "var-expansion-548ac164-5468-4799-8c60-23bbe8314778": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082026441s
Feb 18 21:19:01.665: INFO: Pod "var-expansion-548ac164-5468-4799-8c60-23bbe8314778": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088807232s
Feb 18 21:19:03.677: INFO: Pod "var-expansion-548ac164-5468-4799-8c60-23bbe8314778": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101031991s
Feb 18 21:19:05.684: INFO: Pod "var-expansion-548ac164-5468-4799-8c60-23bbe8314778": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107474425s
Feb 18 21:19:07.693: INFO: Pod "var-expansion-548ac164-5468-4799-8c60-23bbe8314778": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116994876s
STEP: Saw pod success
Feb 18 21:19:07.693: INFO: Pod "var-expansion-548ac164-5468-4799-8c60-23bbe8314778" satisfied condition "success or failure"
Feb 18 21:19:07.699: INFO: Trying to get logs from node jerma-node pod var-expansion-548ac164-5468-4799-8c60-23bbe8314778 container dapi-container: 
STEP: delete the pod
Feb 18 21:19:07.756: INFO: Waiting for pod var-expansion-548ac164-5468-4799-8c60-23bbe8314778 to disappear
Feb 18 21:19:07.762: INFO: Pod var-expansion-548ac164-5468-4799-8c60-23bbe8314778 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:19:07.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5745" for this suite.

• [SLOW TEST:10.275 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":634,"failed":0}
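As an illustrative sketch, the `$(VAR)` command substitution the case above checks: Kubernetes expands `$(NAME)` references in `command` and `args` from the container's `env` list before the process runs. Names and values are placeholders.

```python
# Illustrative sketch: env-var substitution in a container command.
# Kubernetes replaces $(MESSAGE) with the env value at container start;
# "$$" would escape to a literal "$".
container = {
    "name": "dapi-container",
    "image": "busybox",
    "env": [{"name": "MESSAGE", "value": "test message"}],
    "command": ["sh", "-c", "echo $(MESSAGE)"],
}
```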
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:19:07.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:19:07.922: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 18 21:19:07.960: INFO: Number of nodes with available pods: 0
Feb 18 21:19:07.960: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:09.187: INFO: Number of nodes with available pods: 0
Feb 18 21:19:09.187: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:09.977: INFO: Number of nodes with available pods: 0
Feb 18 21:19:09.977: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:10.974: INFO: Number of nodes with available pods: 0
Feb 18 21:19:10.975: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:11.972: INFO: Number of nodes with available pods: 0
Feb 18 21:19:11.972: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:14.316: INFO: Number of nodes with available pods: 0
Feb 18 21:19:14.316: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:15.022: INFO: Number of nodes with available pods: 0
Feb 18 21:19:15.022: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:16.023: INFO: Number of nodes with available pods: 0
Feb 18 21:19:16.023: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:17.044: INFO: Number of nodes with available pods: 0
Feb 18 21:19:17.045: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:17.972: INFO: Number of nodes with available pods: 2
Feb 18 21:19:17.972: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 18 21:19:18.037: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:18.037: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:19.084: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:19.084: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:20.069: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:20.069: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:21.191: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:21.191: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:22.058: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:22.058: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:23.058: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:23.058: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:24.058: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:24.058: INFO: Pod daemon-set-bpzvw is not available
Feb 18 21:19:24.058: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:25.056: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:25.056: INFO: Pod daemon-set-bpzvw is not available
Feb 18 21:19:25.056: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:26.056: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:26.056: INFO: Pod daemon-set-bpzvw is not available
Feb 18 21:19:26.056: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:27.056: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:27.056: INFO: Pod daemon-set-bpzvw is not available
Feb 18 21:19:27.056: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:28.059: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:28.059: INFO: Pod daemon-set-bpzvw is not available
Feb 18 21:19:28.059: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:29.058: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:29.058: INFO: Pod daemon-set-bpzvw is not available
Feb 18 21:19:29.058: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:30.056: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:30.056: INFO: Pod daemon-set-bpzvw is not available
Feb 18 21:19:30.056: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:31.057: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:31.057: INFO: Pod daemon-set-bpzvw is not available
Feb 18 21:19:31.057: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:32.058: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:32.058: INFO: Pod daemon-set-bpzvw is not available
Feb 18 21:19:32.058: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:33.059: INFO: Wrong image for pod: daemon-set-bpzvw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:33.059: INFO: Pod daemon-set-bpzvw is not available
Feb 18 21:19:33.059: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:34.362: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:34.362: INFO: Pod daemon-set-wfddf is not available
Feb 18 21:19:35.061: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:35.061: INFO: Pod daemon-set-wfddf is not available
Feb 18 21:19:36.060: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:36.061: INFO: Pod daemon-set-wfddf is not available
Feb 18 21:19:37.896: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:37.896: INFO: Pod daemon-set-wfddf is not available
Feb 18 21:19:38.413: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:38.413: INFO: Pod daemon-set-wfddf is not available
Feb 18 21:19:39.059: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:39.059: INFO: Pod daemon-set-wfddf is not available
Feb 18 21:19:40.061: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:40.061: INFO: Pod daemon-set-wfddf is not available
Feb 18 21:19:41.059: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:42.057: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:43.074: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:44.061: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:45.086: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:45.086: INFO: Pod daemon-set-kwwmh is not available
Feb 18 21:19:46.060: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:46.060: INFO: Pod daemon-set-kwwmh is not available
Feb 18 21:19:47.069: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:47.069: INFO: Pod daemon-set-kwwmh is not available
Feb 18 21:19:48.057: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:48.057: INFO: Pod daemon-set-kwwmh is not available
Feb 18 21:19:49.058: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:49.058: INFO: Pod daemon-set-kwwmh is not available
Feb 18 21:19:50.059: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:50.059: INFO: Pod daemon-set-kwwmh is not available
Feb 18 21:19:51.059: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:51.059: INFO: Pod daemon-set-kwwmh is not available
Feb 18 21:19:52.056: INFO: Wrong image for pod: daemon-set-kwwmh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 18 21:19:52.056: INFO: Pod daemon-set-kwwmh is not available
Feb 18 21:19:53.072: INFO: Pod daemon-set-9d4nl is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 18 21:19:53.083: INFO: Number of nodes with available pods: 1
Feb 18 21:19:53.083: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:54.095: INFO: Number of nodes with available pods: 1
Feb 18 21:19:54.095: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:55.106: INFO: Number of nodes with available pods: 1
Feb 18 21:19:55.106: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:56.091: INFO: Number of nodes with available pods: 1
Feb 18 21:19:56.091: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:57.096: INFO: Number of nodes with available pods: 1
Feb 18 21:19:57.096: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:58.095: INFO: Number of nodes with available pods: 1
Feb 18 21:19:58.095: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:19:59.098: INFO: Number of nodes with available pods: 1
Feb 18 21:19:59.099: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:20:00.097: INFO: Number of nodes with available pods: 2
Feb 18 21:20:00.097: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2462, will wait for the garbage collector to delete the pods
Feb 18 21:20:00.178: INFO: Deleting DaemonSet.extensions daemon-set took: 8.223881ms
Feb 18 21:20:00.478: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.41842ms
Feb 18 21:20:07.101: INFO: Number of nodes with available pods: 0
Feb 18 21:20:07.101: INFO: Number of running nodes: 0, number of available pods: 0
Feb 18 21:20:07.106: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2462/daemonsets","resourceVersion":"9261968"},"items":null}

Feb 18 21:20:07.110: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2462/pods","resourceVersion":"9261968"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:20:07.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2462" for this suite.

• [SLOW TEST:59.354 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":39,"skipped":651,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:20:07.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb 18 21:20:07.229: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 18 21:20:07.244: INFO: Waiting for terminating namespaces to be deleted...
Feb 18 21:20:07.248: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 18 21:20:07.261: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 18 21:20:07.261: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 21:20:07.261: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 18 21:20:07.261: INFO: 	Container weave ready: true, restart count 1
Feb 18 21:20:07.261: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 21:20:07.261: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 18 21:20:07.287: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 18 21:20:07.287: INFO: 	Container kube-controller-manager ready: true, restart count 13
Feb 18 21:20:07.287: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 18 21:20:07.287: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 21:20:07.287: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 18 21:20:07.287: INFO: 	Container weave ready: true, restart count 0
Feb 18 21:20:07.287: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 21:20:07.287: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 18 21:20:07.287: INFO: 	Container kube-scheduler ready: true, restart count 17
Feb 18 21:20:07.287: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 18 21:20:07.287: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 18 21:20:07.287: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 18 21:20:07.287: INFO: 	Container etcd ready: true, restart count 1
Feb 18 21:20:07.287: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 18 21:20:07.287: INFO: 	Container coredns ready: true, restart count 0
Feb 18 21:20:07.287: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 18 21:20:07.287: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f49bec10db6982], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f49bec13f6ced2], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:20:08.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8173" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":40,"skipped":662,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:20:08.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 18 21:20:24.617: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 21:20:24.629: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 21:20:26.630: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 21:20:26.639: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 21:20:28.630: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 21:20:28.636: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 21:20:30.630: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 21:20:30.639: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 21:20:32.630: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 21:20:32.664: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:20:32.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2267" for this suite.

• [SLOW TEST:24.363 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":684,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:20:32.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Feb 18 21:20:32.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1904'
Feb 18 21:20:33.152: INFO: stderr: ""
Feb 18 21:20:33.152: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 18 21:20:34.158: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:34.158: INFO: Found 0 / 1
Feb 18 21:20:35.157: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:35.157: INFO: Found 0 / 1
Feb 18 21:20:36.158: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:36.158: INFO: Found 0 / 1
Feb 18 21:20:37.816: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:37.816: INFO: Found 0 / 1
Feb 18 21:20:38.164: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:38.164: INFO: Found 0 / 1
Feb 18 21:20:39.162: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:39.162: INFO: Found 0 / 1
Feb 18 21:20:40.161: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:40.161: INFO: Found 0 / 1
Feb 18 21:20:41.161: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:41.161: INFO: Found 0 / 1
Feb 18 21:20:42.192: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:42.192: INFO: Found 0 / 1
Feb 18 21:20:43.159: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:43.159: INFO: Found 0 / 1
Feb 18 21:20:44.169: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:44.169: INFO: Found 1 / 1
Feb 18 21:20:44.169: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 18 21:20:44.173: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:44.173: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 18 21:20:44.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-d5v7v --namespace=kubectl-1904 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 18 21:20:44.302: INFO: stderr: ""
Feb 18 21:20:44.302: INFO: stdout: "pod/agnhost-master-d5v7v patched\n"
STEP: checking annotations
Feb 18 21:20:44.309: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 21:20:44.309: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:20:44.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1904" for this suite.

• [SLOW TEST:11.625 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":42,"skipped":686,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:20:44.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 21:20:44.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aae4a186-44ec-424a-a0dc-e451cb841212" in namespace "downward-api-1820" to be "success or failure"
Feb 18 21:20:44.475: INFO: Pod "downwardapi-volume-aae4a186-44ec-424a-a0dc-e451cb841212": Phase="Pending", Reason="", readiness=false. Elapsed: 33.66611ms
Feb 18 21:20:46.484: INFO: Pod "downwardapi-volume-aae4a186-44ec-424a-a0dc-e451cb841212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04208674s
Feb 18 21:20:48.494: INFO: Pod "downwardapi-volume-aae4a186-44ec-424a-a0dc-e451cb841212": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052383612s
Feb 18 21:20:50.506: INFO: Pod "downwardapi-volume-aae4a186-44ec-424a-a0dc-e451cb841212": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064217682s
Feb 18 21:20:52.513: INFO: Pod "downwardapi-volume-aae4a186-44ec-424a-a0dc-e451cb841212": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071379136s
Feb 18 21:20:54.525: INFO: Pod "downwardapi-volume-aae4a186-44ec-424a-a0dc-e451cb841212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083269176s
STEP: Saw pod success
Feb 18 21:20:54.525: INFO: Pod "downwardapi-volume-aae4a186-44ec-424a-a0dc-e451cb841212" satisfied condition "success or failure"
Feb 18 21:20:54.529: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-aae4a186-44ec-424a-a0dc-e451cb841212 container client-container: 
STEP: delete the pod
Feb 18 21:20:54.711: INFO: Waiting for pod downwardapi-volume-aae4a186-44ec-424a-a0dc-e451cb841212 to disappear
Feb 18 21:20:54.714: INFO: Pod downwardapi-volume-aae4a186-44ec-424a-a0dc-e451cb841212 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:20:54.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1820" for this suite.

• [SLOW TEST:10.399 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":694,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:20:54.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4681
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4681
STEP: Creating statefulset with conflicting port in namespace statefulset-4681
STEP: Waiting until pod test-pod will start running in namespace statefulset-4681
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4681
Feb 18 21:21:05.044: INFO: Observed stateful pod in namespace: statefulset-4681, name: ss-0, uid: 4164411b-e96e-4178-ab1f-b512cf6c82cd, status phase: Pending. Waiting for statefulset controller to delete.
Feb 18 21:21:13.065: INFO: Observed stateful pod in namespace: statefulset-4681, name: ss-0, uid: 4164411b-e96e-4178-ab1f-b512cf6c82cd, status phase: Failed. Waiting for statefulset controller to delete.
Feb 18 21:21:13.077: INFO: Observed stateful pod in namespace: statefulset-4681, name: ss-0, uid: 4164411b-e96e-4178-ab1f-b512cf6c82cd, status phase: Failed. Waiting for statefulset controller to delete.
Feb 18 21:21:13.165: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4681
STEP: Removing pod with conflicting port in namespace statefulset-4681
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4681 and reaches the running state
Feb 18 21:26:13.284: FAIL: Timed out after 300.001s.
Expected
    <*errors.errorString | 0xc0034ea1d0>: {
        s: "pod ss-0 is not in running phase: Pending",
    }
to be nil
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb 18 21:26:13.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-4681'
Feb 18 21:26:15.694: INFO: stderr: ""
Feb 18 21:26:15.694: INFO: stdout: "Name:           ss-0\nNamespace:      statefulset-4681\nPriority:       0\nNode:           jerma-server-mvvl6gufaqub/\nLabels:         baz=blah\n                controller-revision-hash=ss-5d68d76f44\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss-0\nAnnotations:    \nStatus:         Pending\nIP:             \nIPs:            \nControlled By:  StatefulSet/ss\nContainers:\n  webserver:\n    Image:        docker.io/library/httpd:2.4.38-alpine\n    Port:         21017/TCP\n    Host Port:    21017/TCP\n    Environment:  \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bjxsz (ro)\nVolumes:\n  default-token-bjxsz:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-bjxsz\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason            Age   From                                Message\n  ----     ------            ----  ----                                -------\n  Warning  PodFitsHostPorts  5m2s  kubelet, jerma-server-mvvl6gufaqub  Predicate PodFitsHostPorts failed\n"
Feb 18 21:26:15.694: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-4681
Priority:       0
Node:           jerma-server-mvvl6gufaqub/
Labels:         baz=blah
                controller-revision-hash=ss-5d68d76f44
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    
Status:         Pending
IP:             
IPs:            
Controlled By:  StatefulSet/ss
Containers:
  webserver:
    Image:        docker.io/library/httpd:2.4.38-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bjxsz (ro)
Volumes:
  default-token-bjxsz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bjxsz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                                Message
  ----     ------            ----  ----                                -------
  Warning  PodFitsHostPorts  5m2s  kubelet, jerma-server-mvvl6gufaqub  Predicate PodFitsHostPorts failed

Feb 18 21:26:15.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-4681 --tail=100'
Feb 18 21:26:15.819: INFO: rc: 1
Feb 18 21:26:15.819: INFO: 
Last 100 log lines of ss-0:

Feb 18 21:26:15.819: INFO: Deleting all statefulset in ns statefulset-4681
Feb 18 21:26:15.826: INFO: Scaling statefulset ss to 0
Feb 18 21:26:25.890: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 21:26:25.894: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "statefulset-4681".
STEP: Found 17 events.
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:20:55 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:20:55 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-4681/ss is recreating failed Pod ss-0
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:20:55 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:20:56 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:20:57 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:20:58 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:20:59 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:21:00 +0000 UTC - event for test-pod: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:21:03 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:21:03 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:21:03 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:21:03 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:21:03 +0000 UTC - event for test-pod: {kubelet jerma-server-mvvl6gufaqub} Created: Created container webserver
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:21:03 +0000 UTC - event for test-pod: {kubelet jerma-server-mvvl6gufaqub} Started: Started container webserver
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:21:04 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:21:13 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 18 21:26:25.932: INFO: At 2020-02-18 21:21:13 +0000 UTC - event for test-pod: {kubelet jerma-server-mvvl6gufaqub} Killing: Stopping container webserver
Feb 18 21:26:25.935: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Feb 18 21:26:25.935: INFO: 
Feb 18 21:26:25.939: INFO: 
Logging node info for node jerma-node
Feb 18 21:26:25.942: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 9262462 0 2020-01-04 11:59:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-18 21:22:01 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-18 21:22:01 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-18 21:22:01 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-18 21:22:01 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 18 21:26:25.942: INFO: 
Logging kubelet events for node jerma-node
Feb 18 21:26:25.945: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Feb 18 21:26:25.966: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Feb 18 21:26:25.966: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 21:26:25.966: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Feb 18 21:26:25.967: INFO: 	Container weave ready: true, restart count 1
Feb 18 21:26:25.967: INFO: 	Container weave-npc ready: true, restart count 0
W0218 21:26:25.971922       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 21:26:26.025: INFO: 
Latency metrics for node jerma-node
Feb 18 21:26:26.025: INFO: 
Logging node info for node jerma-server-mvvl6gufaqub
Feb 18 21:26:26.033: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub   /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 9262723 0 2020-01-04 11:47:40 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-18 21:23:56 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-18 21:23:56 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-18 21:23:56 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-18 21:23:56 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[ollivier/functest-kubernetes-security@sha256:e07875af6d375759fd233dc464382bb51d2464f6ae50a60625e41226eb1f87be ollivier/functest-kubernetes-security:latest],SizeBytes:1118568659,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 18 21:26:26.033: INFO: 
Logging kubelet events for node jerma-server-mvvl6gufaqub
Feb 18 21:26:26.038: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub
Feb 18 21:26:26.048: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 18 21:26:26.048: INFO: 	Container kube-controller-manager ready: true, restart count 13
Feb 18 21:26:26.048: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Feb 18 21:26:26.048: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 21:26:26.048: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Feb 18 21:26:26.048: INFO: 	Container weave ready: true, restart count 0
Feb 18 21:26:26.048: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 21:26:26.048: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 18 21:26:26.048: INFO: 	Container kube-scheduler ready: true, restart count 17
Feb 18 21:26:26.048: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 18 21:26:26.048: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 18 21:26:26.048: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 18 21:26:26.048: INFO: 	Container etcd ready: true, restart count 1
Feb 18 21:26:26.048: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 18 21:26:26.048: INFO: 	Container coredns ready: true, restart count 0
Feb 18 21:26:26.048: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 18 21:26:26.048: INFO: 	Container coredns ready: true, restart count 0
W0218 21:26:26.051988       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 21:26:26.080: INFO: 
Latency metrics for node jerma-server-mvvl6gufaqub
Feb 18 21:26:26.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4681" for this suite.

• Failure [331.360 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

    Feb 18 21:26:13.284: Timed out after 300.001s.
    Expected
        <*errors.errorString | 0xc0034ea1d0>: {
            s: "pod ss-0 is not in running phase: Pending",
        }
    to be nil

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:762
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":43,"skipped":697,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
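The failed spec timed out waiting for pod `ss-0` to leave `Pending`, which usually points at scheduling, image-pull, or volume-binding delays rather than an application error. The log does not include the manifest the test creates; as a rough, hypothetical sketch (names and image are illustrative, chosen from images already cached on the nodes above), a minimal StatefulSet comparable to the test's `ss` object might look like:

```yaml
# Hypothetical minimal StatefulSet; not taken verbatim from the e2e suite.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test          # headless service the pods register under
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4.38-alpine   # already present on the nodes per the image list above
```

Because StatefulSet pods are created strictly in order, a single pod stuck in `Pending` blocks the whole set, which is why the assertion fails after the full 300s timeout.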
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:26:26.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb 18 21:26:34.801: INFO: Successfully updated pod "annotationupdate97e641f7-b6be-48bf-8912-695dd547b18b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:26:36.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5449" for this suite.

• [SLOW TEST:10.781 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":697,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
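The passing spec above exercises the downward API volume's live-update path: the test mutates the pod's annotations and verifies the projected file changes. A hedged sketch of the kind of pod this relies on (names and annotation values are illustrative, not from the log):

```yaml
# Illustrative pod exposing its own annotations via a downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "one"             # updating this later is reflected in the mounted file
spec:
  containers:
  - name: client
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```

The kubelet periodically resyncs downwardAPI volume contents, so an annotation update eventually appears in `/etc/podinfo/annotations` without restarting the container.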
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:26:36.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-90353968-bf1b-41a5-b244-8b0e696250db
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:26:49.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6785" for this suite.

• [SLOW TEST:12.205 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":752,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
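This spec checks that both `data` (UTF-8 strings) and `binaryData` (base64-encoded bytes) keys of a ConfigMap materialize as files in the mounted volume. A minimal illustrative ConfigMap (key names and contents are made up for this sketch):

```yaml
# Illustrative ConfigMap mixing text and binary keys; both become files
# under the volume's mountPath when mounted into a pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo
data:
  data-1: value-1            # plain text key -> file "data-1"
binaryData:
  dump.bin: aGVsbG8=         # base64 for "hello" -> raw bytes in file "dump.bin"
```

Keys in `data` and `binaryData` share one namespace, so a key may appear in only one of the two maps.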
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:26:49.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 18 21:26:49.194: INFO: Waiting up to 5m0s for pod "pod-acc0e029-df89-4232-a04f-78d2b3a90d52" in namespace "emptydir-3897" to be "success or failure"
Feb 18 21:26:49.215: INFO: Pod "pod-acc0e029-df89-4232-a04f-78d2b3a90d52": Phase="Pending", Reason="", readiness=false. Elapsed: 20.562749ms
Feb 18 21:26:51.221: INFO: Pod "pod-acc0e029-df89-4232-a04f-78d2b3a90d52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026729902s
Feb 18 21:26:53.233: INFO: Pod "pod-acc0e029-df89-4232-a04f-78d2b3a90d52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038749945s
Feb 18 21:26:55.239: INFO: Pod "pod-acc0e029-df89-4232-a04f-78d2b3a90d52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044867308s
Feb 18 21:26:57.247: INFO: Pod "pod-acc0e029-df89-4232-a04f-78d2b3a90d52": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052688798s
Feb 18 21:26:59.258: INFO: Pod "pod-acc0e029-df89-4232-a04f-78d2b3a90d52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063107599s
STEP: Saw pod success
Feb 18 21:26:59.258: INFO: Pod "pod-acc0e029-df89-4232-a04f-78d2b3a90d52" satisfied condition "success or failure"
Feb 18 21:26:59.265: INFO: Trying to get logs from node jerma-node pod pod-acc0e029-df89-4232-a04f-78d2b3a90d52 container test-container: 
STEP: delete the pod
Feb 18 21:26:59.335: INFO: Waiting for pod pod-acc0e029-df89-4232-a04f-78d2b3a90d52 to disappear
Feb 18 21:26:59.346: INFO: Pod pod-acc0e029-df89-4232-a04f-78d2b3a90d52 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:26:59.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3897" for this suite.

• [SLOW TEST:10.297 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":766,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
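The `(root,0666,tmpfs)` spec creates a pod writing into a memory-backed emptyDir and asserts the file mode. A hedged sketch of an equivalent standalone pod (the real test uses the `mounttest` image; busybox is substituted here for readability):

```yaml
# Illustrative pod: tmpfs-backed emptyDir, file created as root with mode 0666.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory         # "Memory" backs the volume with tmpfs instead of node disk
```

With `medium: Memory` the volume counts against the container's memory limit and is lost on node reboot, which is exactly what the `[LinuxOnly]` tag hints at.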
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:26:59.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-13d3a750-96ad-404a-9db8-b0f5a164b929
STEP: Creating a pod to test consume configMaps
Feb 18 21:27:00.621: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-462bfad1-6399-48db-af6d-e774c051cd46" in namespace "projected-1253" to be "success or failure"
Feb 18 21:27:00.792: INFO: Pod "pod-projected-configmaps-462bfad1-6399-48db-af6d-e774c051cd46": Phase="Pending", Reason="", readiness=false. Elapsed: 170.209784ms
Feb 18 21:27:02.800: INFO: Pod "pod-projected-configmaps-462bfad1-6399-48db-af6d-e774c051cd46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178279771s
Feb 18 21:27:04.815: INFO: Pod "pod-projected-configmaps-462bfad1-6399-48db-af6d-e774c051cd46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193688596s
Feb 18 21:27:06.828: INFO: Pod "pod-projected-configmaps-462bfad1-6399-48db-af6d-e774c051cd46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206883156s
Feb 18 21:27:08.839: INFO: Pod "pod-projected-configmaps-462bfad1-6399-48db-af6d-e774c051cd46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217227951s
Feb 18 21:27:10.846: INFO: Pod "pod-projected-configmaps-462bfad1-6399-48db-af6d-e774c051cd46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.224907156s
STEP: Saw pod success
Feb 18 21:27:10.847: INFO: Pod "pod-projected-configmaps-462bfad1-6399-48db-af6d-e774c051cd46" satisfied condition "success or failure"
Feb 18 21:27:10.851: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-462bfad1-6399-48db-af6d-e774c051cd46 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 21:27:11.046: INFO: Waiting for pod pod-projected-configmaps-462bfad1-6399-48db-af6d-e774c051cd46 to disappear
Feb 18 21:27:11.058: INFO: Pod pod-projected-configmaps-462bfad1-6399-48db-af6d-e774c051cd46 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:27:11.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1253" for this suite.

• [SLOW TEST:11.708 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":769,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
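"With mappings" means the ConfigMap keys are remapped to custom file paths via `items`, rather than landing as files named after their keys. An illustrative projected volume (ConfigMap name and key are placeholders):

```yaml
# Illustrative pod fragment: a projected configMap source remapping key -> path.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  containers:
  - name: reader
    image: busybox:1.29
    command: ["cat", "/etc/projected/path/to/data-2"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1              # ConfigMap key
            path: path/to/data-2     # relative path inside the mount
```

Without the `items` mapping, the file would simply appear as `/etc/projected/data-1`.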
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:27:11.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Feb 18 21:27:11.243: INFO: Waiting up to 5m0s for pod "client-containers-7d1c7da1-78cb-4628-8afc-5ad56578a041" in namespace "containers-9315" to be "success or failure"
Feb 18 21:27:11.295: INFO: Pod "client-containers-7d1c7da1-78cb-4628-8afc-5ad56578a041": Phase="Pending", Reason="", readiness=false. Elapsed: 52.317385ms
Feb 18 21:27:13.301: INFO: Pod "client-containers-7d1c7da1-78cb-4628-8afc-5ad56578a041": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05805885s
Feb 18 21:27:15.308: INFO: Pod "client-containers-7d1c7da1-78cb-4628-8afc-5ad56578a041": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064478637s
Feb 18 21:27:17.344: INFO: Pod "client-containers-7d1c7da1-78cb-4628-8afc-5ad56578a041": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100377362s
Feb 18 21:27:19.349: INFO: Pod "client-containers-7d1c7da1-78cb-4628-8afc-5ad56578a041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106022805s
STEP: Saw pod success
Feb 18 21:27:19.349: INFO: Pod "client-containers-7d1c7da1-78cb-4628-8afc-5ad56578a041" satisfied condition "success or failure"
Feb 18 21:27:19.353: INFO: Trying to get logs from node jerma-node pod client-containers-7d1c7da1-78cb-4628-8afc-5ad56578a041 container test-container: 
STEP: delete the pod
Feb 18 21:27:19.508: INFO: Waiting for pod client-containers-7d1c7da1-78cb-4628-8afc-5ad56578a041 to disappear
Feb 18 21:27:19.539: INFO: Pod client-containers-7d1c7da1-78cb-4628-8afc-5ad56578a041 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:27:19.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9315" for this suite.

• [SLOW TEST:8.628 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":825,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
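The entrypoint-override spec relies on the mapping between pod fields and Docker concepts: `command` replaces the image's `ENTRYPOINT`, `args` replaces its `CMD`. A minimal illustrative pod (names are made up):

```yaml
# Illustrative pod: "command" overrides the image ENTRYPOINT entirely.
apiVersion: v1
kind: Pod
metadata:
  name: override-command-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/echo"]           # overrides ENTRYPOINT
    args: ["override", "command"]    # overrides CMD; passed to /bin/echo
```

If only `args` were set, the image's original `ENTRYPOINT` would still run, receiving the new arguments, which is the distinction this conformance test pins down.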
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:27:19.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 18 21:27:27.974: INFO: &Pod{ObjectMeta:{send-events-79dac6aa-e1c5-4946-ae30-72e58e5f9590  events-1120 /api/v1/namespaces/events-1120/pods/send-events-79dac6aa-e1c5-4946-ae30-72e58e5f9590 7287e43b-99fb-46b5-a333-0775f5d8ec5e 9263378 0 2020-02-18 21:27:19 +0000 UTC   map[name:foo time:921791177] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjddr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjddr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjddr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityConte
xt{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:27:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:27:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:27:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:27:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-18 21:27:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 21:27:26 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://9019f56b14a21fd32195bfe7e70cd2c4482db4fe642194a2b87e06b12cdc07e8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Feb 18 21:27:29.981: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 18 21:27:31.990: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:27:32.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1120" for this suite.

• [SLOW TEST:12.393 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":49,"skipped":833,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:27:32.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Feb 18 21:27:32.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:27:47.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3025" for this suite.

• [SLOW TEST:15.648 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":50,"skipped":851,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
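The test above marks one CRD version as not served and checks that its definition disappears from the published OpenAPI spec. A minimal sketch of such a two-version CRD follows; the group, kind, and names are illustrative, not the randomized ones the test generates.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # hypothetical name; the test uses a generated one
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true                # still served: stays in the published OpenAPI spec
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false               # not served: its definition is removed from the spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
```

After applying a change like this, `kubectl get --raw /openapi/v2` should no longer contain definitions for the unserved version, while the served version's definitions remain unchanged, which is what the two "check" steps above assert.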
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:27:47.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 18 21:27:47.902: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7898 /api/v1/namespaces/watch-7898/configmaps/e2e-watch-test-watch-closed 9b462025-ffe8-4108-80ec-040f38fe0524 9263457 0 2020-02-18 21:27:47 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 18 21:27:47.902: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7898 /api/v1/namespaces/watch-7898/configmaps/e2e-watch-test-watch-closed 9b462025-ffe8-4108-80ec-040f38fe0524 9263458 0 2020-02-18 21:27:47 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 18 21:27:47.925: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7898 /api/v1/namespaces/watch-7898/configmaps/e2e-watch-test-watch-closed 9b462025-ffe8-4108-80ec-040f38fe0524 9263459 0 2020-02-18 21:27:47 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 18 21:27:47.925: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7898 /api/v1/namespaces/watch-7898/configmaps/e2e-watch-test-watch-closed 9b462025-ffe8-4108-80ec-040f38fe0524 9263460 0 2020-02-18 21:27:47 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:27:47.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7898" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":51,"skipped":854,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:27:47.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:27:48.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Feb 18 21:27:51.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3944 create -f -'
Feb 18 21:27:53.812: INFO: stderr: ""
Feb 18 21:27:53.812: INFO: stdout: "e2e-test-crd-publish-openapi-9549-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 18 21:27:53.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3944 delete e2e-test-crd-publish-openapi-9549-crds test-foo'
Feb 18 21:27:54.016: INFO: stderr: ""
Feb 18 21:27:54.016: INFO: stdout: "e2e-test-crd-publish-openapi-9549-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Feb 18 21:27:54.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3944 apply -f -'
Feb 18 21:27:54.415: INFO: stderr: ""
Feb 18 21:27:54.415: INFO: stdout: "e2e-test-crd-publish-openapi-9549-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 18 21:27:54.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3944 delete e2e-test-crd-publish-openapi-9549-crds test-foo'
Feb 18 21:27:54.567: INFO: stderr: ""
Feb 18 21:27:54.567: INFO: stdout: "e2e-test-crd-publish-openapi-9549-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Feb 18 21:27:54.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3944 create -f -'
Feb 18 21:27:54.812: INFO: rc: 1
Feb 18 21:27:54.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3944 apply -f -'
Feb 18 21:27:55.322: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Feb 18 21:27:55.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3944 create -f -'
Feb 18 21:27:55.646: INFO: rc: 1
Feb 18 21:27:55.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3944 apply -f -'
Feb 18 21:27:56.028: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Feb 18 21:27:56.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9549-crds'
Feb 18 21:27:56.364: INFO: stderr: ""
Feb 18 21:27:56.364: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9549-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Feb 18 21:27:56.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9549-crds.metadata'
Feb 18 21:27:56.719: INFO: stderr: ""
Feb 18 21:27:56.719: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9549-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. 
This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. 
If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. 
May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Feb 18 21:27:56.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9549-crds.spec'
Feb 18 21:27:57.237: INFO: stderr: ""
Feb 18 21:27:57.237: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9549-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Feb 18 21:27:57.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9549-crds.spec.bars'
Feb 18 21:27:57.688: INFO: stderr: ""
Feb 18 21:27:57.688: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9549-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb 18 21:27:57.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9549-crds.spec.bars2'
Feb 18 21:27:58.755: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:28:02.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3944" for this suite.

• [SLOW TEST:14.442 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":52,"skipped":869,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
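The validation behavior exercised above comes from a structural schema published with the CRD. The sketch below is reconstructed from the `kubectl explain` output in this log (the Foo kind with `spec.bars`, where `name` is required); the type of `age` is assumed, since the log elides it.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-publish-openapi-test-foo.example.com  # illustrative; the test generates a random name
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: foos
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        description: Foo CRD for Testing
        properties:
          spec:
            type: object
            description: Specification of Foo
            properties:
              bars:
                type: array
                description: List of Bars and their specs.
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      type: string       # required, per the explain output
                      description: Name of Bar.
                    age:
                      type: string       # type assumed; elided in the log
                      description: Age of Bar.
                    bazs:
                      type: array
                      description: List of Bazs.
                      items:
                        type: string
          status:
            type: object
            description: Status of Foo
```

With a schema like this published, `kubectl explain` can describe the CR's fields recursively, and client-side validation rejects objects with unknown properties or missing required ones, matching the `rc: 1` results above.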
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:28:02.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb 18 21:28:13.160: INFO: Successfully updated pod "labelsupdate577d4e8e-e90b-4ad2-80cf-c126ff9183f2"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:28:15.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6445" for this suite.

• [SLOW TEST:12.830 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":900,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
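The label-update behavior tested above relies on a downward API volume projecting `metadata.labels` into a file. A minimal sketch (pod name and label values are hypothetical, not the test's generated ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo             # illustrative name
  labels:
    tier: test
spec:
  containers:
  - name: client
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    command: ["sleep", "3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```

When the pod's labels are later modified (for example with `kubectl label pod labels-demo tier=prod --overwrite`), the kubelet eventually refreshes `/etc/podinfo/labels` to the new values; that refresh is what the test waits for.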
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:28:15.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb 18 21:28:15.371: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:28:31.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5363" for this suite.

• [SLOW TEST:16.753 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":54,"skipped":914,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
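The scenario above can be sketched with a pod whose init container always fails under `restartPolicy: Never`; the names and image here are illustrative, not the test's.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo          # illustrative name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["/bin/false"]     # exits non-zero, so init never succeeds
  containers:
  - name: app
    image: busybox
    command: ["echo", "never runs"]
```

Because the restart policy is Never, the failed init container is not retried: the pod transitions to Failed and the app container is never started, which is exactly what this conformance test asserts.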
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:28:31.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:28:43.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4090" for this suite.

• [SLOW TEST:11.252 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":55,"skipped":934,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
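The quota lifecycle above (create quota, create a ReplicationController, watch usage rise, delete, watch it release) can be reproduced with an object-count quota; the name and limit here are assumptions, not the test's values.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: rc-quota-demo           # illustrative name
spec:
  hard:
    replicationcontrollers: "1" # cap on RC objects in the namespace
```

After applying it, `kubectl describe quota rc-quota-demo` shows `Used` against `Hard`; creating a ReplicationController increments the used count, and deleting it releases the usage, mirroring the "captures creation" and "released usage" steps in the test.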
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:28:43.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 21:28:43.356: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ba5bc60-1e26-4dea-8bc2-a39401dfc783" in namespace "projected-8902" to be "success or failure"
Feb 18 21:28:43.383: INFO: Pod "downwardapi-volume-6ba5bc60-1e26-4dea-8bc2-a39401dfc783": Phase="Pending", Reason="", readiness=false. Elapsed: 26.734681ms
Feb 18 21:28:45.387: INFO: Pod "downwardapi-volume-6ba5bc60-1e26-4dea-8bc2-a39401dfc783": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031207435s
Feb 18 21:28:47.440: INFO: Pod "downwardapi-volume-6ba5bc60-1e26-4dea-8bc2-a39401dfc783": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083786998s
Feb 18 21:28:49.453: INFO: Pod "downwardapi-volume-6ba5bc60-1e26-4dea-8bc2-a39401dfc783": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096686862s
Feb 18 21:28:51.493: INFO: Pod "downwardapi-volume-6ba5bc60-1e26-4dea-8bc2-a39401dfc783": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.137145906s
STEP: Saw pod success
Feb 18 21:28:51.493: INFO: Pod "downwardapi-volume-6ba5bc60-1e26-4dea-8bc2-a39401dfc783" satisfied condition "success or failure"
Feb 18 21:28:51.497: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6ba5bc60-1e26-4dea-8bc2-a39401dfc783 container client-container: 
STEP: delete the pod
Feb 18 21:28:51.568: INFO: Waiting for pod downwardapi-volume-6ba5bc60-1e26-4dea-8bc2-a39401dfc783 to disappear
Feb 18 21:28:51.583: INFO: Pod downwardapi-volume-6ba5bc60-1e26-4dea-8bc2-a39401dfc783 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:28:51.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8902" for this suite.

• [SLOW TEST:8.446 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
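The test above checks that when a container declares no memory limit, the projected downward API volume reports the node's allocatable memory as the default. A sketch of the pod the test builds (the pod name and image are assumptions; the container name matches the log):

```yaml
# Hypothetical pod reproducing the check: the container sets no memory limit,
# so limits.memory resolves to the node's allocatable memory.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo     # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox                  # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # falls back to node allocatable when unset
```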
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":956,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:28:51.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:28:51.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:29:00.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1724" for this suite.

• [SLOW TEST:8.556 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":962,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:29:00.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-kq9k
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 21:29:00.400: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kq9k" in namespace "subpath-4328" to be "success or failure"
Feb 18 21:29:00.405: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Pending", Reason="", readiness=false. Elapsed: 5.186828ms
Feb 18 21:29:02.411: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011104921s
Feb 18 21:29:04.418: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017903206s
Feb 18 21:29:06.428: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02755658s
Feb 18 21:29:08.435: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Running", Reason="", readiness=true. Elapsed: 8.034723137s
Feb 18 21:29:10.445: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Running", Reason="", readiness=true. Elapsed: 10.044676959s
Feb 18 21:29:12.968: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Running", Reason="", readiness=true. Elapsed: 12.567855202s
Feb 18 21:29:14.977: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Running", Reason="", readiness=true. Elapsed: 14.577244981s
Feb 18 21:29:16.987: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Running", Reason="", readiness=true. Elapsed: 16.586789225s
Feb 18 21:29:19.000: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Running", Reason="", readiness=true. Elapsed: 18.600138375s
Feb 18 21:29:21.006: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Running", Reason="", readiness=true. Elapsed: 20.60609641s
Feb 18 21:29:23.014: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Running", Reason="", readiness=true. Elapsed: 22.614304344s
Feb 18 21:29:25.021: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Running", Reason="", readiness=true. Elapsed: 24.621157489s
Feb 18 21:29:27.027: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Running", Reason="", readiness=true. Elapsed: 26.626869766s
Feb 18 21:29:29.032: INFO: Pod "pod-subpath-test-secret-kq9k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.631996158s
STEP: Saw pod success
Feb 18 21:29:29.032: INFO: Pod "pod-subpath-test-secret-kq9k" satisfied condition "success or failure"
Feb 18 21:29:29.056: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-kq9k container test-container-subpath-secret-kq9k: 
STEP: delete the pod
Feb 18 21:29:29.089: INFO: Waiting for pod pod-subpath-test-secret-kq9k to disappear
Feb 18 21:29:29.094: INFO: Pod pod-subpath-test-secret-kq9k no longer exists
STEP: Deleting pod pod-subpath-test-secret-kq9k
Feb 18 21:29:29.094: INFO: Deleting pod "pod-subpath-test-secret-kq9k" in namespace "subpath-4328"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:29:29.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4328" for this suite.

• [SLOW TEST:28.870 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
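The subpath test above mounts a Secret volume into a pod using a `subPath` and waits for the pod to run to completion. An approximate manifest for what `pod-subpath-test-secret-kq9k` does (the Secret name, key, and image are assumptions):

```yaml
# Hypothetical pod mounting a single key of a Secret via subPath,
# approximating the atomic-writer subpath test pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                  # illustrative image
    command: ["sh", "-c", "cat /test-volume/data && sleep 20"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/data
      subPath: data                 # mount one file out of the volume
  volumes:
  - name: test-volume
    secret:
      secretName: my-secret         # assumed to hold a "data" key
```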
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":58,"skipped":997,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:29:29.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod

Feb 18 21:29:29.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-1285 -- logs-generator --log-lines-total 100 --run-duration 20s'
Feb 18 21:29:29.433: INFO: stderr: ""
Feb 18 21:29:29.434: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Feb 18 21:29:29.434: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Feb 18 21:29:29.434: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1285" to be "running and ready, or succeeded"
Feb 18 21:29:29.452: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 18.258495ms
Feb 18 21:29:31.700: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26585217s
Feb 18 21:29:33.708: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273489934s
Feb 18 21:29:35.751: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.317323557s
Feb 18 21:29:37.759: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324976989s
Feb 18 21:29:39.768: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.333581735s
Feb 18 21:29:41.776: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 12.341904202s
Feb 18 21:29:41.776: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Feb 18 21:29:41.776: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Feb 18 21:29:41.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1285'
Feb 18 21:29:41.997: INFO: stderr: ""
Feb 18 21:29:41.997: INFO: stdout: "I0218 21:29:38.985584       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/498s 583\nI0218 21:29:39.186001       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/4mw 469\nI0218 21:29:39.385884       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/xdmt 306\nI0218 21:29:39.585981       1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/nf5v 248\nI0218 21:29:39.785958       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/f9q 477\nI0218 21:29:39.985805       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/967l 235\nI0218 21:29:40.185894       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/wck 540\nI0218 21:29:40.385903       1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/pjc 205\nI0218 21:29:40.585881       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/8jf 499\nI0218 21:29:40.785700       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/6d8 290\nI0218 21:29:40.985848       1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/b88 432\nI0218 21:29:41.185866       1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/s86 347\nI0218 21:29:41.385872       1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/6fh7 590\nI0218 21:29:41.585982       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/95z 208\nI0218 21:29:41.786605       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/v4kh 334\n"
STEP: limiting log lines
Feb 18 21:29:41.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1285 --tail=1'
Feb 18 21:29:42.211: INFO: stderr: ""
Feb 18 21:29:42.211: INFO: stdout: "I0218 21:29:42.186085       1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/h6ht 314\n"
Feb 18 21:29:42.211: INFO: got output "I0218 21:29:42.186085       1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/h6ht 314\n"
STEP: limiting log bytes
Feb 18 21:29:42.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1285 --limit-bytes=1'
Feb 18 21:29:42.409: INFO: stderr: ""
Feb 18 21:29:42.409: INFO: stdout: "I"
Feb 18 21:29:42.409: INFO: got output "I"
STEP: exposing timestamps
Feb 18 21:29:42.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1285 --tail=1 --timestamps'
Feb 18 21:29:42.575: INFO: stderr: ""
Feb 18 21:29:42.575: INFO: stdout: "2020-02-18T21:29:42.386588495Z I0218 21:29:42.385999       1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/rgbz 211\n"
Feb 18 21:29:42.575: INFO: got output "2020-02-18T21:29:42.386588495Z I0218 21:29:42.385999       1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/rgbz 211\n"
STEP: restricting to a time range
Feb 18 21:29:45.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1285 --since=1s'
Feb 18 21:29:45.192: INFO: stderr: ""
Feb 18 21:29:45.192: INFO: stdout: "I0218 21:29:44.186186       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/8k86 541\nI0218 21:29:44.385847       1 logs_generator.go:76] 27 PUT /api/v1/namespaces/kube-system/pods/gf7z 358\nI0218 21:29:44.585713       1 logs_generator.go:76] 28 POST /api/v1/namespaces/kube-system/pods/fzx 367\nI0218 21:29:44.785750       1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/2hjc 574\nI0218 21:29:44.985821       1 logs_generator.go:76] 30 POST /api/v1/namespaces/ns/pods/gcds 410\n"
Feb 18 21:29:45.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1285 --since=24h'
Feb 18 21:29:45.324: INFO: stderr: ""
Feb 18 21:29:45.324: INFO: stdout: "I0218 21:29:38.985584       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/498s 583\nI0218 21:29:39.186001       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/4mw 469\nI0218 21:29:39.385884       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/xdmt 306\nI0218 21:29:39.585981       1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/nf5v 248\nI0218 21:29:39.785958       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/f9q 477\nI0218 21:29:39.985805       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/967l 235\nI0218 21:29:40.185894       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/wck 540\nI0218 21:29:40.385903       1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/pjc 205\nI0218 21:29:40.585881       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/8jf 499\nI0218 21:29:40.785700       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/6d8 290\nI0218 21:29:40.985848       1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/b88 432\nI0218 21:29:41.185866       1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/s86 347\nI0218 21:29:41.385872       1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/6fh7 590\nI0218 21:29:41.585982       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/95z 208\nI0218 21:29:41.786605       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/v4kh 334\nI0218 21:29:41.986027       1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/kzr 560\nI0218 21:29:42.186085       1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/h6ht 314\nI0218 21:29:42.385999       1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/rgbz 211\nI0218 21:29:42.586007       1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/ctbx 202\nI0218 21:29:42.785942       1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/jll 578\nI0218 21:29:42.985889       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/6zn 562\nI0218 21:29:43.185875       1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/cd4 398\nI0218 21:29:43.385859       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/sr4 337\nI0218 21:29:43.586085       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/t9gt 398\nI0218 21:29:43.786264       1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/r7r6 552\nI0218 21:29:43.986770       1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/mv6j 365\nI0218 21:29:44.186186       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/8k86 541\nI0218 21:29:44.385847       1 logs_generator.go:76] 27 PUT /api/v1/namespaces/kube-system/pods/gf7z 358\nI0218 21:29:44.585713       1 logs_generator.go:76] 28 POST /api/v1/namespaces/kube-system/pods/fzx 367\nI0218 21:29:44.785750       1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/2hjc 574\nI0218 21:29:44.985821       1 logs_generator.go:76] 30 POST /api/v1/namespaces/ns/pods/gcds 410\nI0218 21:29:45.185658       1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/n5kn 202\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Feb 18 21:29:45.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1285'
Feb 18 21:30:02.352: INFO: stderr: ""
Feb 18 21:30:02.352: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:30:02.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1285" for this suite.

• [SLOW TEST:33.305 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
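The kubectl invocations above demonstrate the log-filtering flags shown in the run: `--tail=1` (last N lines), `--limit-bytes=1` (truncate output), `--timestamps` (prepend RFC3339 timestamps), and `--since=1s` / `--since=24h` (time-window filtering). A pod roughly equivalent to the `kubectl run logs-generator ...` command in the log would be:

```yaml
# Pod form of the `kubectl run logs-generator ...` invocation above
# (agnhost's logs-generator subcommand emits numbered request-style log lines).
apiVersion: v1
kind: Pod
metadata:
  name: logs-generator
spec:
  containers:
  - name: logs-generator
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args:
    - logs-generator
    - --log-lines-total=100
    - --run-duration=20s
```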
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":59,"skipped":997,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:30:02.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-87b45c77-b1ee-4f3a-8abe-611395f0ecda
STEP: Creating a pod to test consume configMaps
Feb 18 21:30:02.602: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c35989b1-c3e7-42c1-a639-f38af45e04e7" in namespace "projected-4379" to be "success or failure"
Feb 18 21:30:02.615: INFO: Pod "pod-projected-configmaps-c35989b1-c3e7-42c1-a639-f38af45e04e7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.585345ms
Feb 18 21:30:04.637: INFO: Pod "pod-projected-configmaps-c35989b1-c3e7-42c1-a639-f38af45e04e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034646975s
Feb 18 21:30:06.833: INFO: Pod "pod-projected-configmaps-c35989b1-c3e7-42c1-a639-f38af45e04e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229973263s
Feb 18 21:30:08.883: INFO: Pod "pod-projected-configmaps-c35989b1-c3e7-42c1-a639-f38af45e04e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279964222s
Feb 18 21:30:10.889: INFO: Pod "pod-projected-configmaps-c35989b1-c3e7-42c1-a639-f38af45e04e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.286195239s
STEP: Saw pod success
Feb 18 21:30:10.889: INFO: Pod "pod-projected-configmaps-c35989b1-c3e7-42c1-a639-f38af45e04e7" satisfied condition "success or failure"
Feb 18 21:30:10.891: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c35989b1-c3e7-42c1-a639-f38af45e04e7 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 21:30:10.950: INFO: Waiting for pod pod-projected-configmaps-c35989b1-c3e7-42c1-a639-f38af45e04e7 to disappear
Feb 18 21:30:10.979: INFO: Pod pod-projected-configmaps-c35989b1-c3e7-42c1-a639-f38af45e04e7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:30:10.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4379" for this suite.

• [SLOW TEST:8.582 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
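The projected ConfigMap test above consumes a key through a projected volume with a remapped path and an explicit item mode. A hedged sketch of such a pod (the ConfigMap name, key, path, and image are assumptions; the container name matches the log):

```yaml
# Hypothetical pod projecting a single ConfigMap key under a new path
# with an explicit file mode, as the test does programmatically.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo   # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                      # illustrative image
    command: ["sh", "-c", "cat /etc/projected/mapped-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # assumed ConfigMap name
          items:
          - key: data-1          # assumed key
            path: mapped-key     # remapped filename ("mappings")
            mode: 0400           # the "Item mode set" part of the test
```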
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":998,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:30:10.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:30:11.198: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 18 21:30:16.204: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 18 21:30:20.223: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 18 21:30:22.229: INFO: Creating deployment "test-rollover-deployment"
Feb 18 21:30:22.242: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 18 21:30:24.254: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 18 21:30:24.262: INFO: Ensure that both replica sets have 1 created replica
Feb 18 21:30:24.270: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 18 21:30:24.281: INFO: Updating deployment test-rollover-deployment
Feb 18 21:30:24.281: INFO: Waiting for deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 18 21:30:26.310: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 18 21:30:26.325: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 18 21:30:26.341: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 21:30:26.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658224, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:30:28.353: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 21:30:28.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658224, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:30:30.357: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 21:30:30.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658224, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:30:32.365: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 21:30:32.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658224, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:30:34.354: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 21:30:34.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:30:36.353: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 21:30:36.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:30:38.353: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 21:30:38.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:30:40.352: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 21:30:40.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:30:42.349: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 21:30:42.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658222, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:30:44.352: INFO: 
Feb 18 21:30:44.352: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb 18 21:30:44.362: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-7372 /apis/apps/v1/namespaces/deployment-7372/deployments/test-rollover-deployment f6ff70ba-e58b-41a9-b569-4549f0fcdf42 9264175 2 2020-02-18 21:30:22 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040ad328  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-18 21:30:22 +0000 UTC,LastTransitionTime:2020-02-18 21:30:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-18 21:30:42 +0000 UTC,LastTransitionTime:2020-02-18 21:30:22 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 18 21:30:44.366: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-7372 /apis/apps/v1/namespaces/deployment-7372/replicasets/test-rollover-deployment-574d6dfbff 5e56853e-2739-4fc3-8fed-52d460087fef 9264164 2 2020-02-18 21:30:24 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment f6ff70ba-e58b-41a9-b569-4549f0fcdf42 0xc004132317 0xc004132318}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004132388  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 18 21:30:44.366: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 18 21:30:44.366: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-7372 /apis/apps/v1/namespaces/deployment-7372/replicasets/test-rollover-controller 6e3be8b1-5919-493c-b736-0b8e536e8d57 9264173 2 2020-02-18 21:30:11 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment f6ff70ba-e58b-41a9-b569-4549f0fcdf42 0xc004132247 0xc004132248}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0041322a8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 18 21:30:44.366: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-7372 /apis/apps/v1/namespaces/deployment-7372/replicasets/test-rollover-deployment-f6c94f66c 612de3bf-8f8a-47dd-a7e2-735f24ca03c2 9264113 2 2020-02-18 21:30:22 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment f6ff70ba-e58b-41a9-b569-4549f0fcdf42 0xc0041323f0 0xc0041323f1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004132468  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 18 21:30:44.372: INFO: Pod "test-rollover-deployment-574d6dfbff-mxdrc" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-mxdrc test-rollover-deployment-574d6dfbff- deployment-7372 /api/v1/namespaces/deployment-7372/pods/test-rollover-deployment-574d6dfbff-mxdrc 9c9d0d8d-90a3-4328-89a8-cfd80d2b4d77 9264138 0 2020-02-18 21:30:24 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 5e56853e-2739-4fc3-8fed-52d460087fef 0xc0041329a7 0xc0041329a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h68s5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h68s5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h68s5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:30:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:30:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:30:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:30:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-18 21:30:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 21:30:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://4d7f815460debeb9ba631c5c27a5fd69a045de64fd4f3be0da4094b24193644a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:30:44.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7372" for this suite.

• [SLOW TEST:33.384 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":61,"skipped":1006,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
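Editor's note on the rollover test above: the status dumps report Replicas:2 with UnavailableReplicas:1 during the update because the Deployment is configured with MaxUnavailable:0 and MaxSurge:1 and MinReadySeconds:10, so the old pod must stay available while one surged replacement becomes ready. A minimal sketch of those pod-count bounds (the function name is illustrative, not framework code):

```python
def rollout_bounds(replicas: int, max_surge: int, max_unavailable: int):
    """Bounds the Deployment controller enforces during a rolling update:
    total pods may surge up to replicas + maxSurge, while available pods
    may not drop below replicas - maxUnavailable."""
    max_total = replicas + max_surge
    min_available = replicas - max_unavailable
    return max_total, min_available

# The test-rollover-deployment above (replicas=1, maxSurge=1, maxUnavailable=0)
# may therefore run 2 pods at once but never fewer than 1 available pod,
# matching the Replicas:2 / AvailableReplicas:1 seen in the status lines.
```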
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:30:44.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 21:30:44.567: INFO: Waiting up to 5m0s for pod "downwardapi-volume-170bf578-5a9f-42a6-ab03-e245e0439418" in namespace "downward-api-9256" to be "success or failure"
Feb 18 21:30:44.592: INFO: Pod "downwardapi-volume-170bf578-5a9f-42a6-ab03-e245e0439418": Phase="Pending", Reason="", readiness=false. Elapsed: 24.377029ms
Feb 18 21:30:46.603: INFO: Pod "downwardapi-volume-170bf578-5a9f-42a6-ab03-e245e0439418": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035660836s
Feb 18 21:30:48.614: INFO: Pod "downwardapi-volume-170bf578-5a9f-42a6-ab03-e245e0439418": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047048267s
Feb 18 21:30:50.664: INFO: Pod "downwardapi-volume-170bf578-5a9f-42a6-ab03-e245e0439418": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0965105s
Feb 18 21:30:52.672: INFO: Pod "downwardapi-volume-170bf578-5a9f-42a6-ab03-e245e0439418": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104715879s
Feb 18 21:30:54.685: INFO: Pod "downwardapi-volume-170bf578-5a9f-42a6-ab03-e245e0439418": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118188575s
STEP: Saw pod success
Feb 18 21:30:54.685: INFO: Pod "downwardapi-volume-170bf578-5a9f-42a6-ab03-e245e0439418" satisfied condition "success or failure"
Feb 18 21:30:54.688: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-170bf578-5a9f-42a6-ab03-e245e0439418 container client-container: 
STEP: delete the pod
Feb 18 21:30:54.783: INFO: Waiting for pod downwardapi-volume-170bf578-5a9f-42a6-ab03-e245e0439418 to disappear
Feb 18 21:30:54.796: INFO: Pod downwardapi-volume-170bf578-5a9f-42a6-ab03-e245e0439418 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:30:54.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9256" for this suite.

• [SLOW TEST:10.444 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1007,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
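Editor's note: the "Elapsed" lines in the test above come from the framework polling the pod's phase until it reaches the "success or failure" terminal state, with a 5m0s timeout. A rough sketch of that wait loop, where `get_phase` is a hypothetical stand-in for the API read (not the framework's actual helper):

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300.0, interval_s=2.0):
    """Poll a pod until it reaches a terminal phase (Succeeded or Failed),
    mirroring the e2e framework's wait seen in the log above."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()  # stand-in for reading pod.status.phase
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod did not reach a terminal phase in time")
```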
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:30:54.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-cf325e66-008c-401d-abdc-5ec5386a00de
STEP: Creating a pod to test consume secrets
Feb 18 21:30:54.999: INFO: Waiting up to 5m0s for pod "pod-secrets-8795e5b8-f8ee-4c2a-a727-d733c4a72db0" in namespace "secrets-1272" to be "success or failure"
Feb 18 21:30:55.011: INFO: Pod "pod-secrets-8795e5b8-f8ee-4c2a-a727-d733c4a72db0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.251691ms
Feb 18 21:30:57.017: INFO: Pod "pod-secrets-8795e5b8-f8ee-4c2a-a727-d733c4a72db0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018388504s
Feb 18 21:30:59.025: INFO: Pod "pod-secrets-8795e5b8-f8ee-4c2a-a727-d733c4a72db0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026007573s
Feb 18 21:31:01.036: INFO: Pod "pod-secrets-8795e5b8-f8ee-4c2a-a727-d733c4a72db0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036844962s
Feb 18 21:31:03.047: INFO: Pod "pod-secrets-8795e5b8-f8ee-4c2a-a727-d733c4a72db0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04783702s
Feb 18 21:31:05.053: INFO: Pod "pod-secrets-8795e5b8-f8ee-4c2a-a727-d733c4a72db0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053835616s
STEP: Saw pod success
Feb 18 21:31:05.053: INFO: Pod "pod-secrets-8795e5b8-f8ee-4c2a-a727-d733c4a72db0" satisfied condition "success or failure"
Feb 18 21:31:05.068: INFO: Trying to get logs from node jerma-node pod pod-secrets-8795e5b8-f8ee-4c2a-a727-d733c4a72db0 container secret-volume-test: 
STEP: delete the pod
Feb 18 21:31:05.112: INFO: Waiting for pod pod-secrets-8795e5b8-f8ee-4c2a-a727-d733c4a72db0 to disappear
Feb 18 21:31:05.121: INFO: Pod pod-secrets-8795e5b8-f8ee-4c2a-a727-d733c4a72db0 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:31:05.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1272" for this suite.

• [SLOW TEST:10.382 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1008,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
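Editor's note on what the secret-volume-test container verifies above: Secret values live base64-encoded in the API object's `data` field, but the kubelet mounts them as decoded files in the volume. A minimal illustration of that decode step (the key and value here are made up, not the test's actual payload):

```python
import base64

# Hypothetical Secret .data field as stored in the API (base64-encoded).
secret_data = {"data-1": base64.b64encode(b"value-1").decode("ascii")}

# What the container sees: one file per key, containing the decoded bytes.
mounted_files = {key: base64.b64decode(val) for key, val in secret_data.items()}
```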
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:31:05.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Feb 18 21:31:05.906: INFO: created pod pod-service-account-defaultsa
Feb 18 21:31:05.906: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 18 21:31:05.985: INFO: created pod pod-service-account-mountsa
Feb 18 21:31:05.985: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 18 21:31:06.015: INFO: created pod pod-service-account-nomountsa
Feb 18 21:31:06.015: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 18 21:31:06.057: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 18 21:31:06.057: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 18 21:31:06.079: INFO: created pod pod-service-account-mountsa-mountspec
Feb 18 21:31:06.079: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 18 21:31:06.138: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 18 21:31:06.138: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 18 21:31:06.150: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 18 21:31:06.150: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 18 21:31:06.202: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 18 21:31:06.203: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 18 21:31:09.763: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 18 21:31:09.763: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:31:09.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8480" for this suite.

• [SLOW TEST:6.245 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":64,"skipped":1033,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
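Editor's note: the mount/no-mount matrix logged above exercises the documented precedence for `automountServiceAccountToken`: a pod-level setting, when present, overrides the ServiceAccount-level one, and the default is to mount. A small sketch of that rule, checkable against the outcomes the test recorded:

```python
def token_automounted(pod_setting, sa_setting):
    """automountServiceAccountToken precedence: pod spec wins if set,
    then the ServiceAccount's field, then the default (mount)."""
    if pod_setting is not None:
        return pod_setting
    if sa_setting is not None:
        return sa_setting
    return True
```

For example, pod-service-account-nomountsa-mountspec mounts the token (pod-level True beats the ServiceAccount's False), while pod-service-account-mountsa-nomountspec does not, exactly as the INFO lines above report.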
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:31:11.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:31:15.569: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 18 21:31:15.804: INFO: Number of nodes with available pods: 0
Feb 18 21:31:15.804: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 18 21:31:16.027: INFO: Number of nodes with available pods: 0
Feb 18 21:31:16.027: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:17.080: INFO: Number of nodes with available pods: 0
Feb 18 21:31:17.080: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:18.038: INFO: Number of nodes with available pods: 0
Feb 18 21:31:18.039: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:21.962: INFO: Number of nodes with available pods: 0
Feb 18 21:31:21.962: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:25.026: INFO: Number of nodes with available pods: 0
Feb 18 21:31:25.026: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:25.653: INFO: Number of nodes with available pods: 0
Feb 18 21:31:25.653: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:27.042: INFO: Number of nodes with available pods: 0
Feb 18 21:31:27.042: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:31.546: INFO: Number of nodes with available pods: 0
Feb 18 21:31:31.546: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:32.968: INFO: Number of nodes with available pods: 0
Feb 18 21:31:32.968: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:33.626: INFO: Number of nodes with available pods: 0
Feb 18 21:31:33.627: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:34.367: INFO: Number of nodes with available pods: 0
Feb 18 21:31:34.367: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:35.366: INFO: Number of nodes with available pods: 0
Feb 18 21:31:35.367: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:36.100: INFO: Number of nodes with available pods: 0
Feb 18 21:31:36.100: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:37.033: INFO: Number of nodes with available pods: 0
Feb 18 21:31:37.033: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:38.033: INFO: Number of nodes with available pods: 0
Feb 18 21:31:38.033: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:39.035: INFO: Number of nodes with available pods: 0
Feb 18 21:31:39.035: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:40.032: INFO: Number of nodes with available pods: 0
Feb 18 21:31:40.032: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:41.033: INFO: Number of nodes with available pods: 0
Feb 18 21:31:41.033: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:42.040: INFO: Number of nodes with available pods: 0
Feb 18 21:31:42.040: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:43.032: INFO: Number of nodes with available pods: 0
Feb 18 21:31:43.032: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:44.036: INFO: Number of nodes with available pods: 1
Feb 18 21:31:44.036: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 18 21:31:44.121: INFO: Number of nodes with available pods: 1
Feb 18 21:31:44.121: INFO: Number of running nodes: 0, number of available pods: 1
Feb 18 21:31:45.126: INFO: Number of nodes with available pods: 0
Feb 18 21:31:45.126: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 18 21:31:45.147: INFO: Number of nodes with available pods: 0
Feb 18 21:31:45.147: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:46.152: INFO: Number of nodes with available pods: 0
Feb 18 21:31:46.152: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:47.154: INFO: Number of nodes with available pods: 0
Feb 18 21:31:47.154: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:48.162: INFO: Number of nodes with available pods: 0
Feb 18 21:31:48.162: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:49.159: INFO: Number of nodes with available pods: 0
Feb 18 21:31:49.159: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:50.154: INFO: Number of nodes with available pods: 0
Feb 18 21:31:50.154: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:51.154: INFO: Number of nodes with available pods: 0
Feb 18 21:31:51.154: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:52.156: INFO: Number of nodes with available pods: 0
Feb 18 21:31:52.156: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:53.155: INFO: Number of nodes with available pods: 0
Feb 18 21:31:53.155: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:54.154: INFO: Number of nodes with available pods: 0
Feb 18 21:31:54.154: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:55.159: INFO: Number of nodes with available pods: 0
Feb 18 21:31:55.159: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:56.156: INFO: Number of nodes with available pods: 0
Feb 18 21:31:56.156: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:57.154: INFO: Number of nodes with available pods: 0
Feb 18 21:31:57.154: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:58.155: INFO: Number of nodes with available pods: 0
Feb 18 21:31:58.155: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:31:59.155: INFO: Number of nodes with available pods: 1
Feb 18 21:31:59.155: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1985, will wait for the garbage collector to delete the pods
Feb 18 21:31:59.238: INFO: Deleting DaemonSet.extensions daemon-set took: 19.552265ms
Feb 18 21:31:59.538: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.50367ms
Feb 18 21:32:12.445: INFO: Number of nodes with available pods: 0
Feb 18 21:32:12.445: INFO: Number of running nodes: 0, number of available pods: 0
Feb 18 21:32:12.448: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1985/daemonsets","resourceVersion":"9264580"},"items":null}

Feb 18 21:32:12.452: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1985/pods","resourceVersion":"9264580"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:32:12.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1985" for this suite.

• [SLOW TEST:61.056 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":65,"skipped":1042,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:32:12.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 21:32:12.665: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57a3d292-7e57-4268-be66-d6d506520e7a" in namespace "projected-7867" to be "success or failure"
Feb 18 21:32:12.670: INFO: Pod "downwardapi-volume-57a3d292-7e57-4268-be66-d6d506520e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.197055ms
Feb 18 21:32:14.676: INFO: Pod "downwardapi-volume-57a3d292-7e57-4268-be66-d6d506520e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011264311s
Feb 18 21:32:16.688: INFO: Pod "downwardapi-volume-57a3d292-7e57-4268-be66-d6d506520e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022979962s
Feb 18 21:32:18.698: INFO: Pod "downwardapi-volume-57a3d292-7e57-4268-be66-d6d506520e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033409526s
Feb 18 21:32:20.706: INFO: Pod "downwardapi-volume-57a3d292-7e57-4268-be66-d6d506520e7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041473583s
STEP: Saw pod success
Feb 18 21:32:20.706: INFO: Pod "downwardapi-volume-57a3d292-7e57-4268-be66-d6d506520e7a" satisfied condition "success or failure"
Feb 18 21:32:20.713: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-57a3d292-7e57-4268-be66-d6d506520e7a container client-container: 
STEP: delete the pod
Feb 18 21:32:21.295: INFO: Waiting for pod downwardapi-volume-57a3d292-7e57-4268-be66-d6d506520e7a to disappear
Feb 18 21:32:21.310: INFO: Pod downwardapi-volume-57a3d292-7e57-4268-be66-d6d506520e7a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:32:21.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7867" for this suite.

• [SLOW TEST:8.853 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1046,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:32:21.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:32:21.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 18 21:32:24.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-872 create -f -'
Feb 18 21:32:27.615: INFO: stderr: ""
Feb 18 21:32:27.615: INFO: stdout: "e2e-test-crd-publish-openapi-5713-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 18 21:32:27.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-872 delete e2e-test-crd-publish-openapi-5713-crds test-cr'
Feb 18 21:32:28.171: INFO: stderr: ""
Feb 18 21:32:28.171: INFO: stdout: "e2e-test-crd-publish-openapi-5713-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Feb 18 21:32:28.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-872 apply -f -'
Feb 18 21:32:28.688: INFO: stderr: ""
Feb 18 21:32:28.689: INFO: stdout: "e2e-test-crd-publish-openapi-5713-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 18 21:32:28.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-872 delete e2e-test-crd-publish-openapi-5713-crds test-cr'
Feb 18 21:32:28.841: INFO: stderr: ""
Feb 18 21:32:28.841: INFO: stdout: "e2e-test-crd-publish-openapi-5713-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 18 21:32:28.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5713-crds'
Feb 18 21:32:29.260: INFO: stderr: ""
Feb 18 21:32:29.260: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5713-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:32:32.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-872" for this suite.

• [SLOW TEST:11.480 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":67,"skipped":1064,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:32:32.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:32:41.170: INFO: Waiting up to 5m0s for pod "client-envvars-3e05ba39-8d1a-4316-ae4c-2ce88c28c970" in namespace "pods-7259" to be "success or failure"
Feb 18 21:32:41.186: INFO: Pod "client-envvars-3e05ba39-8d1a-4316-ae4c-2ce88c28c970": Phase="Pending", Reason="", readiness=false. Elapsed: 16.45932ms
Feb 18 21:32:43.195: INFO: Pod "client-envvars-3e05ba39-8d1a-4316-ae4c-2ce88c28c970": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025083003s
Feb 18 21:32:45.201: INFO: Pod "client-envvars-3e05ba39-8d1a-4316-ae4c-2ce88c28c970": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030820154s
Feb 18 21:32:47.212: INFO: Pod "client-envvars-3e05ba39-8d1a-4316-ae4c-2ce88c28c970": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042105199s
Feb 18 21:32:49.218: INFO: Pod "client-envvars-3e05ba39-8d1a-4316-ae4c-2ce88c28c970": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048092409s
STEP: Saw pod success
Feb 18 21:32:49.218: INFO: Pod "client-envvars-3e05ba39-8d1a-4316-ae4c-2ce88c28c970" satisfied condition "success or failure"
Feb 18 21:32:49.222: INFO: Trying to get logs from node jerma-node pod client-envvars-3e05ba39-8d1a-4316-ae4c-2ce88c28c970 container env3cont: 
STEP: delete the pod
Feb 18 21:32:49.268: INFO: Waiting for pod client-envvars-3e05ba39-8d1a-4316-ae4c-2ce88c28c970 to disappear
Feb 18 21:32:49.369: INFO: Pod client-envvars-3e05ba39-8d1a-4316-ae4c-2ce88c28c970 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:32:49.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7259" for this suite.

• [SLOW TEST:16.538 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1131,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:32:49.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:32:49.612: INFO: Creating deployment "webserver-deployment"
Feb 18 21:32:49.618: INFO: Waiting for observed generation 1
Feb 18 21:32:51.631: INFO: Waiting for all required pods to come up
Feb 18 21:32:51.639: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 18 21:33:33.717: INFO: Waiting for deployment "webserver-deployment" to complete
Feb 18 21:33:33.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:35.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:37.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:39.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:41.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:43.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:45.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:47.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:49.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:51.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:53.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:55.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:57.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:33:59.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:34:01.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658370, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658369, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:34:03.735: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb 18 21:34:03.748: INFO: Updating deployment webserver-deployment
Feb 18 21:34:03.748: INFO: Waiting for observed generation 2
Feb 18 21:34:09.498: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 18 21:34:09.583: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 18 21:34:09.869: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 18 21:34:12.636: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 18 21:34:12.636: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 18 21:34:12.641: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 18 21:34:12.653: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Feb 18 21:34:12.653: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb 18 21:34:12.665: INFO: Updating deployment webserver-deployment
Feb 18 21:34:12.665: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb 18 21:34:12.944: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 18 21:34:18.417: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb 18 21:34:21.297: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-6917 /apis/apps/v1/namespaces/deployment-6917/deployments/webserver-deployment 73070eed-e3e2-4a26-90e8-d035d97f80e0 9265128 3 2020-02-18 21:32:49 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003ea8ab8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-18 21:34:12 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-18 21:34:18 +0000 UTC,LastTransitionTime:2020-02-18 21:32:49 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
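The dump above shows the rolling-update bookkeeping for this test: with `Replicas:*30`, `MaxSurge:3`, and `MaxUnavailable:2`, the controller may run at most 33 pods in total (the `deployment.kubernetes.io/max-replicas:33` annotation) and considers the deployment available only with at least 28 ready pods, which is why `AvailableReplicas:8` yields the `MinimumReplicasUnavailable` condition. A minimal sketch of that arithmetic (the helper name is illustrative, not a Kubernetes API; absolute values are assumed, whereas Kubernetes also accepts percentages):

```python
# Rolling-update capacity bounds as reflected in the deployment status above.
# maxSurge and maxUnavailable are taken as absolute pod counts here; when given
# as percentages, Kubernetes resolves them against the desired replica count.

def rollout_bounds(desired: int, max_surge: int, max_unavailable: int) -> tuple:
    """Return (max total pods, minimum available pods) during a rollout."""
    max_total = desired + max_surge            # upper bound on old + new pods
    min_available = desired - max_unavailable  # availability threshold
    return max_total, min_available

# Values from the log: Replicas:*30, MaxSurge:3, MaxUnavailable:2
print(rollout_bounds(30, 3, 2))  # (33, 28)
```

With only 8 available replicas against a floor of 28, the `Available` condition stays `False` until enough new pods become ready.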

Feb 18 21:34:24.472: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-6917 /apis/apps/v1/namespaces/deployment-6917/replicasets/webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 9265114 3 2020-02-18 21:34:03 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 73070eed-e3e2-4a26-90e8-d035d97f80e0 0xc002dd6ea7 0xc002dd6ea8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002dd6f18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 18 21:34:24.473: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Feb 18 21:34:24.473: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-6917 /apis/apps/v1/namespaces/deployment-6917/replicasets/webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 9265104 3 2020-02-18 21:32:49 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 73070eed-e3e2-4a26-90e8-d035d97f80e0 0xc002dd6de7 0xc002dd6de8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002dd6e48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Feb 18 21:34:26.167: INFO: Pod "webserver-deployment-595b5b9587-2vbpn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2vbpn webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-2vbpn 708b39ec-cc7d-49fb-89cb-bb82ffdd7691 9265109 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc002dd73f7 0xc002dd73f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 21:34:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.168: INFO: Pod "webserver-deployment-595b5b9587-2vf57" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2vf57 webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-2vf57 f940b822-2c22-4437-a630-6aab4b1625ad 9264899 0 2020-02-18 21:32:49 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc002dd7557 0xc002dd7558}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-02-18 21:32:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 21:33:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://4cf5061559f64b726e5405ac018a495d23ebdff9ef594deaf7c405bcfb695930,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.168: INFO: Pod "webserver-deployment-595b5b9587-627vt" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-627vt webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-627vt 69e57bc4-2c25-48b8-a401-7eb6537be49a 9265146 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc002dd76d0 0xc002dd76d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 21:34:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.168: INFO: Pod "webserver-deployment-595b5b9587-8cpsk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8cpsk webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-8cpsk 13d03342-e861-4706-bca7-e3875c8f8e17 9265084 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc002dd7827 0xc002dd7828}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.168: INFO: Pod "webserver-deployment-595b5b9587-9j8lw" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9j8lw webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-9j8lw edb96140-c00b-46fe-a9c4-052040f1d34a 9265126 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc002dd7937 0xc002dd7938}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 21:34:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.168: INFO: Pod "webserver-deployment-595b5b9587-9zlkm" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9zlkm webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-9zlkm 5e08bbc2-8d85-433f-87a4-f01704f0d3d8 9264894 0 2020-02-18 21:32:49 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc002dd7aa7 0xc002dd7aa8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-18 21:32:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 21:33:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://5781581478c4cd86d17849f5a46cfef244d46ff9c8ebc55f82b8aa22ed8eca03,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.169: INFO: Pod "webserver-deployment-595b5b9587-b224z" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-b224z webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-b224z 79a47d2b-9c83-44ed-b3d7-847583cddf3b 9265102 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc002dd7c20 0xc002dd7c21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.169: INFO: Pod "webserver-deployment-595b5b9587-bclkn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bclkn webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-bclkn 157d3f7e-8688-43b8-9660-2bbf776224c6 9265096 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc002dd7d27 0xc002dd7d28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.169: INFO: Pod "webserver-deployment-595b5b9587-c6qmh" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-c6qmh webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-c6qmh 878acb5c-cff1-4580-aace-9be7d74f3ad0 9264904 0 2020-02-18 21:32:49 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc002dd7e37 0xc002dd7e38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-18 21:32:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 21:33:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://ff389730920bc46b2b76c82992f3abbf57bb5778633892c4a1592562c773d6bc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.169: INFO: Pod "webserver-deployment-595b5b9587-czw8s" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-czw8s webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-czw8s 5b4d5a64-50a6-4429-91fe-a7a610d3cd91 9264901 0 2020-02-18 21:32:49 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc002dd7fb0 0xc002dd7fb1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-18 21:32:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 21:33:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://100eb9b29a0be85a946dfc97be905a32726ab0bf2702b558bf29a23cf1c86df8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.169: INFO: Pod "webserver-deployment-595b5b9587-g965p" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-g965p webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-g965p 955b551c-fb03-4b80-aa21-afabae64b6b7 9265098 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc0027e0120 0xc0027e0121}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.169: INFO: Pod "webserver-deployment-595b5b9587-h8226" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-h8226 webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-h8226 1b1bfd9f-fc9c-48ff-9a3c-52a5e512d0b6 9265108 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc0027e0237 0xc0027e0238}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-18 21:34:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.169: INFO: Pod "webserver-deployment-595b5b9587-j8qrg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-j8qrg webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-j8qrg d283d157-eba1-4e20-b6bf-e67b5e8e27da 9265099 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc0027e0387 0xc0027e0388}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.169: INFO: Pod "webserver-deployment-595b5b9587-lss7z" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lss7z webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-lss7z fd8e0454-a3af-42af-aac5-a0383d885f21 9264925 0 2020-02-18 21:32:49 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc0027e0617 0xc0027e0618}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-02-18 21:32:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 21:33:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://db2e43b01e8ef796bbf562c5f9df6b3721d759247a1cc74bf3f981afda89015e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.170: INFO: Pod "webserver-deployment-595b5b9587-n6ltj" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-n6ltj webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-n6ltj 62a52ece-2595-437a-ac7e-a03515416ef5 9264906 0 2020-02-18 21:32:49 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc0027e0bf0 0xc0027e0bf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-18 21:32:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 21:33:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://772b123a2c185ad101685e671e7ac7cffc93c8a3dffe59c1399dedfe04b8df37,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.170: INFO: Pod "webserver-deployment-595b5b9587-qs7fw" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qs7fw webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-qs7fw 15760d37-e964-4401-800f-4d1b46a53e80 9264903 0 2020-02-18 21:32:49 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc0027e0e80 0xc0027e0e81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-02-18 21:32:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 21:33:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c74da90cd09833e7f9bb22882fb50c9f405589586f9986624de08311d609aa41,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.170: INFO: Pod "webserver-deployment-595b5b9587-rfqkj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rfqkj webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-rfqkj 6552c50c-f5aa-4c58-bfc8-1ead19bd080b 9265101 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc0027e11e0 0xc0027e11e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.170: INFO: Pod "webserver-deployment-595b5b9587-vt9vt" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vt9vt webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-vt9vt 4da9ddd1-a34e-4f6f-ad77-a1bb2fc8a661 9265068 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc0027e13b7 0xc0027e13b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.170: INFO: Pod "webserver-deployment-595b5b9587-w467h" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w467h webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-w467h a89b5593-9ffb-41a1-a792-7f62c4d970aa 9265063 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc0027e1687 0xc0027e1688}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.174: INFO: Pod "webserver-deployment-595b5b9587-xh8pc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xh8pc webserver-deployment-595b5b9587- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-595b5b9587-xh8pc 7b44ce7a-20fd-4b90-86e2-1c3720d932dd 9264922 0 2020-02-18 21:32:49 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9cefb56d-1755-4470-b3d6-c2d16c358a9f 0xc0027e1847 0xc0027e1848}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:33:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-02-18 21:32:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 21:33:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://5150afbdd402b4968fffdeed6b29990d085792a1c203cb0aa5205033e7429a0d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.174: INFO: Pod "webserver-deployment-c7997dcc8-2k9qk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2k9qk webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-2k9qk a8db64a8-638a-4a21-b29a-23445016f2fe 9265100 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc0027e1b50 0xc0027e1b51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.174: INFO: Pod "webserver-deployment-c7997dcc8-2kxxl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2kxxl webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-2kxxl 43c70df7-2a2b-4f06-b478-4b4186467e66 9265097 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc0027e1c77 0xc0027e1c78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.175: INFO: Pod "webserver-deployment-c7997dcc8-5hv24" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5hv24 webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-5hv24 d93562ce-8b35-484d-b106-cd8b60d27357 9265105 0 2020-02-18 21:34:13 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc0027e1db7 0xc0027e1db8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.175: INFO: Pod "webserver-deployment-c7997dcc8-77rb9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-77rb9 webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-77rb9 ac689c64-7b50-4478-b25f-f3b0a8e50ca1 9265014 0 2020-02-18 21:34:03 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc0027e1ed7 0xc0027e1ed8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 21:34:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.175: INFO: Pod "webserver-deployment-c7997dcc8-k52mf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k52mf webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-k52mf 072672b5-d7ca-4912-a421-48d25aa1d7db 9265125 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc00291a187 0xc00291a188}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-18 21:34:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.175: INFO: Pod "webserver-deployment-c7997dcc8-lh4dl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lh4dl webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-lh4dl 882c07ce-f39a-4194-b8ee-871de3d8b548 9265136 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc00291a437 0xc00291a438}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 21:34:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.175: INFO: Pod "webserver-deployment-c7997dcc8-lm6mj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lm6mj webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-lm6mj c3fac1d7-2445-4591-be75-d7adbb7822b6 9265040 0 2020-02-18 21:34:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc00291a867 0xc00291a868}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 21:34:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.176: INFO: Pod "webserver-deployment-c7997dcc8-nr9fz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nr9fz webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-nr9fz 83060192-a3f9-4810-872d-35141927b51b 9265036 0 2020-02-18 21:34:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc00291abe7 0xc00291abe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-18 21:34:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.176: INFO: Pod "webserver-deployment-c7997dcc8-r57gt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r57gt webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-r57gt 1e9c1dd8-17e1-49c1-8edb-c507f15d65a5 9265132 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc00291ae97 0xc00291ae98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-18 21:34:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.176: INFO: Pod "webserver-deployment-c7997dcc8-tkjsd" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tkjsd webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-tkjsd 0117e6e2-4dbc-4b47-bc95-7ddff40426fd 9265031 0 2020-02-18 21:34:03 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc00291b007 0xc00291b008}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-18 21:34:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.176: INFO: Pod "webserver-deployment-c7997dcc8-vqw72" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vqw72 webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-vqw72 d804a2fc-8939-431d-bb89-94adea889a49 9265037 0 2020-02-18 21:34:03 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc00291b177 0xc00291b178}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 21:34:07 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.176: INFO: Pod "webserver-deployment-c7997dcc8-xwxsx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xwxsx webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-xwxsx 548403ce-6150-4ff0-8b5d-69393ca84b58 9265089 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc00291b2f7 0xc00291b2f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 21:34:26.176: INFO: Pod "webserver-deployment-c7997dcc8-z5pz9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z5pz9 webserver-deployment-c7997dcc8- deployment-6917 /api/v1/namespaces/deployment-6917/pods/webserver-deployment-c7997dcc8-z5pz9 0582cdbc-12fd-478e-8ca0-d32d3190dc42 9265095 0 2020-02-18 21:34:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0db357fd-1f89-4b58-bf34-7ccbac912fbf 0xc00291b427 0xc00291b428}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vp6kc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vp6kc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vp6kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 21:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:34:26.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6917" for this suite.

• [SLOW TEST:99.980 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":69,"skipped":1140,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:34:29.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-35d0352a-3364-4f25-9fc5-2148908eca96
STEP: Creating a pod to test consume configMaps
Feb 18 21:34:32.833: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c" in namespace "configmap-5803" to be "success or failure"
Feb 18 21:34:33.024: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 190.973853ms
Feb 18 21:34:35.170: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336893602s
Feb 18 21:34:38.226: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.392969163s
Feb 18 21:34:42.931: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.097871016s
Feb 18 21:34:44.947: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.113390567s
Feb 18 21:34:47.032: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.198470423s
Feb 18 21:34:49.577: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.743863167s
Feb 18 21:34:51.902: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.068703365s
Feb 18 21:34:54.415: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.581558922s
Feb 18 21:34:56.756: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.922067732s
Feb 18 21:34:59.553: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.719082264s
Feb 18 21:35:02.742: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.909017659s
Feb 18 21:35:05.337: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.503717497s
Feb 18 21:35:07.696: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.862356288s
Feb 18 21:35:10.187: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.35321661s
Feb 18 21:35:12.195: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.361387751s
Feb 18 21:35:14.200: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 41.36691263s
Feb 18 21:35:16.205: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 43.371338651s
Feb 18 21:35:18.210: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.376298085s
Feb 18 21:35:20.214: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 47.380474943s
Feb 18 21:35:22.219: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 49.38559461s
Feb 18 21:35:24.225: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.391869739s
Feb 18 21:35:26.231: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 53.397898177s
Feb 18 21:35:28.240: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 55.40702665s
Feb 18 21:35:30.375: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 57.541585245s
STEP: Saw pod success
Feb 18 21:35:30.375: INFO: Pod "pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c" satisfied condition "success or failure"
Feb 18 21:35:30.428: INFO: Trying to get logs from node jerma-node pod pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c container configmap-volume-test: 
STEP: delete the pod
Feb 18 21:35:30.654: INFO: Waiting for pod pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c to disappear
Feb 18 21:35:30.660: INFO: Pod pod-configmaps-5ce56ed5-5add-4142-8f90-c3b83b23fd1c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:35:30.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5803" for this suite.

• [SLOW TEST:61.303 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1140,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:35:30.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:35:42.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3865" for this suite.

• [SLOW TEST:12.238 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1171,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:35:42.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-a50ae298-aed7-4fe0-8164-45e9d8c5970a in namespace container-probe-6584
Feb 18 21:35:49.149: INFO: Started pod busybox-a50ae298-aed7-4fe0-8164-45e9d8c5970a in namespace container-probe-6584
STEP: checking the pod's current state and verifying that restartCount is present
Feb 18 21:35:49.153: INFO: Initial restart count of pod busybox-a50ae298-aed7-4fe0-8164-45e9d8c5970a is 0
Feb 18 21:36:39.654: INFO: Restart count of pod container-probe-6584/busybox-a50ae298-aed7-4fe0-8164-45e9d8c5970a is now 1 (50.500793397s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:36:39.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6584" for this suite.

• [SLOW TEST:56.834 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1176,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:36:39.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-2077c28d-2b1d-4b0d-8874-30d685d9e9ab
STEP: Creating a pod to test consume secrets
Feb 18 21:36:39.898: INFO: Waiting up to 5m0s for pod "pod-secrets-f7706572-ef0b-417b-9173-12f9227ceaf7" in namespace "secrets-3614" to be "success or failure"
Feb 18 21:36:39.937: INFO: Pod "pod-secrets-f7706572-ef0b-417b-9173-12f9227ceaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 39.77802ms
Feb 18 21:36:41.943: INFO: Pod "pod-secrets-f7706572-ef0b-417b-9173-12f9227ceaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04501425s
Feb 18 21:36:43.952: INFO: Pod "pod-secrets-f7706572-ef0b-417b-9173-12f9227ceaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05407684s
Feb 18 21:36:45.959: INFO: Pod "pod-secrets-f7706572-ef0b-417b-9173-12f9227ceaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06168917s
Feb 18 21:36:47.965: INFO: Pod "pod-secrets-f7706572-ef0b-417b-9173-12f9227ceaf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06747872s
STEP: Saw pod success
Feb 18 21:36:47.965: INFO: Pod "pod-secrets-f7706572-ef0b-417b-9173-12f9227ceaf7" satisfied condition "success or failure"
Feb 18 21:36:47.967: INFO: Trying to get logs from node jerma-node pod pod-secrets-f7706572-ef0b-417b-9173-12f9227ceaf7 container secret-volume-test: 
STEP: delete the pod
Feb 18 21:36:48.034: INFO: Waiting for pod pod-secrets-f7706572-ef0b-417b-9173-12f9227ceaf7 to disappear
Feb 18 21:36:48.049: INFO: Pod pod-secrets-f7706572-ef0b-417b-9173-12f9227ceaf7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:36:48.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3614" for this suite.

• [SLOW TEST:8.306 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1199,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:36:48.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 18 21:36:56.692: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:36:57.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2303" for this suite.

• [SLOW TEST:9.303 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1211,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
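Editor's note: the passing test above exercises a container that runs as a non-root user and writes its termination message to a non-default path. A minimal hypothetical sketch of that kind of pod spec follows; the names, image, and path are illustrative assumptions, not taken from this log (only the expected message "DONE" appears in the output above).

```yaml
# Hypothetical sketch (not from the log): non-root container writing
# its termination message to a non-default terminationMessagePath.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29            # assumed image
    securityContext:
      runAsUser: 1000              # non-root UID, per the test title
    # Write "DONE" (the message the log expects) to the custom path.
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
```

On success the kubelet copies the file's contents into the container status, which is what the "the termination message should be set" step verifies.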
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:36:57.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 18 21:36:57.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5532'
Feb 18 21:36:57.739: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 21:36:57.739: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Feb 18 21:36:59.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5532'
Feb 18 21:37:00.087: INFO: stderr: ""
Feb 18 21:37:00.087: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:37:00.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5532" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":75,"skipped":1249,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:37:00.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Feb 18 21:37:00.201: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:37:00.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9773" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":76,"skipped":1264,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:37:00.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 18 21:37:00.396: INFO: Waiting up to 5m0s for pod "downward-api-8dbf2f6d-9572-4c61-b216-bdf70e66c403" in namespace "downward-api-1556" to be "success or failure"
Feb 18 21:37:00.473: INFO: Pod "downward-api-8dbf2f6d-9572-4c61-b216-bdf70e66c403": Phase="Pending", Reason="", readiness=false. Elapsed: 76.780135ms
Feb 18 21:37:02.559: INFO: Pod "downward-api-8dbf2f6d-9572-4c61-b216-bdf70e66c403": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162773408s
Feb 18 21:37:04.569: INFO: Pod "downward-api-8dbf2f6d-9572-4c61-b216-bdf70e66c403": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172344477s
Feb 18 21:37:06.582: INFO: Pod "downward-api-8dbf2f6d-9572-4c61-b216-bdf70e66c403": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185023189s
Feb 18 21:37:08.592: INFO: Pod "downward-api-8dbf2f6d-9572-4c61-b216-bdf70e66c403": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195368692s
Feb 18 21:37:10.598: INFO: Pod "downward-api-8dbf2f6d-9572-4c61-b216-bdf70e66c403": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.201756646s
STEP: Saw pod success
Feb 18 21:37:10.598: INFO: Pod "downward-api-8dbf2f6d-9572-4c61-b216-bdf70e66c403" satisfied condition "success or failure"
Feb 18 21:37:10.601: INFO: Trying to get logs from node jerma-node pod downward-api-8dbf2f6d-9572-4c61-b216-bdf70e66c403 container dapi-container: 
STEP: delete the pod
Feb 18 21:37:10.685: INFO: Waiting for pod downward-api-8dbf2f6d-9572-4c61-b216-bdf70e66c403 to disappear
Feb 18 21:37:10.697: INFO: Pod downward-api-8dbf2f6d-9572-4c61-b216-bdf70e66c403 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:37:10.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1556" for this suite.

• [SLOW TEST:10.463 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1271,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
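Editor's note: the Downward API test above populates environment variables from `resourceFieldRef`; when the container declares no resource limits, the reported values fall back to the node's allocatable capacity, which is what the test asserts. A minimal hypothetical sketch of such a pod spec (image and env var names are illustrative assumptions; the container name `dapi-container` does appear in the log):

```yaml
# Hypothetical sketch (not from the log): env vars filled from
# resourceFieldRef. With no limits set, values default to node
# allocatable.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29            # assumed image
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```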
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:37:10.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-9780e179-35da-4ff2-a45e-b351c80e4142
STEP: Creating a pod to test consume secrets
Feb 18 21:37:11.292: INFO: Waiting up to 5m0s for pod "pod-secrets-47ddf802-9fab-497d-a4fa-e1f2713a8b15" in namespace "secrets-6318" to be "success or failure"
Feb 18 21:37:11.314: INFO: Pod "pod-secrets-47ddf802-9fab-497d-a4fa-e1f2713a8b15": Phase="Pending", Reason="", readiness=false. Elapsed: 21.234753ms
Feb 18 21:37:13.322: INFO: Pod "pod-secrets-47ddf802-9fab-497d-a4fa-e1f2713a8b15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029336419s
Feb 18 21:37:15.331: INFO: Pod "pod-secrets-47ddf802-9fab-497d-a4fa-e1f2713a8b15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038653326s
Feb 18 21:37:17.341: INFO: Pod "pod-secrets-47ddf802-9fab-497d-a4fa-e1f2713a8b15": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048741033s
Feb 18 21:37:19.354: INFO: Pod "pod-secrets-47ddf802-9fab-497d-a4fa-e1f2713a8b15": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061636883s
Feb 18 21:37:21.361: INFO: Pod "pod-secrets-47ddf802-9fab-497d-a4fa-e1f2713a8b15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068396047s
STEP: Saw pod success
Feb 18 21:37:21.361: INFO: Pod "pod-secrets-47ddf802-9fab-497d-a4fa-e1f2713a8b15" satisfied condition "success or failure"
Feb 18 21:37:21.365: INFO: Trying to get logs from node jerma-node pod pod-secrets-47ddf802-9fab-497d-a4fa-e1f2713a8b15 container secret-volume-test: 
STEP: delete the pod
Feb 18 21:37:22.433: INFO: Waiting for pod pod-secrets-47ddf802-9fab-497d-a4fa-e1f2713a8b15 to disappear
Feb 18 21:37:22.449: INFO: Pod pod-secrets-47ddf802-9fab-497d-a4fa-e1f2713a8b15 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:37:22.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6318" for this suite.
STEP: Destroying namespace "secret-namespace-9387" for this suite.

• [SLOW TEST:11.862 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1281,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
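Editor's note: the test above mounts a secret as a volume while a same-named secret exists in a second namespace (`secret-namespace-9387` in the log); since pods can only reference secrets in their own namespace, the other secret is irrelevant. A minimal hypothetical sketch, reusing the secret and container names from the log but with an assumed image and mount path:

```yaml
# Hypothetical sketch (image and mountPath assumed): a pod mounting a
# secret volume; only secrets in the pod's own namespace are visible.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # illustrative name
  namespace: secrets-6318          # pod's namespace, from the log
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29            # assumed image
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume   # assumed path
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-9780e179-35da-4ff2-a45e-b351c80e4142  # from the log
```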
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:37:22.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 18 21:37:23.717: INFO: Pod name wrapped-volume-race-0a396b80-9ed6-4e4b-9c9e-8b105526908c: Found 0 pods out of 5
Feb 18 21:37:28.734: INFO: Pod name wrapped-volume-race-0a396b80-9ed6-4e4b-9c9e-8b105526908c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0a396b80-9ed6-4e4b-9c9e-8b105526908c in namespace emptydir-wrapper-5417, will wait for the garbage collector to delete the pods
Feb 18 21:38:03.183: INFO: Deleting ReplicationController wrapped-volume-race-0a396b80-9ed6-4e4b-9c9e-8b105526908c took: 50.897749ms
Feb 18 21:38:03.584: INFO: Terminating ReplicationController wrapped-volume-race-0a396b80-9ed6-4e4b-9c9e-8b105526908c pods took: 401.056167ms
STEP: Creating RC which spawns configmap-volume pods
Feb 18 21:38:23.152: INFO: Pod name wrapped-volume-race-79f7a2f7-b475-48f8-8e50-ce8a9ad83f51: Found 0 pods out of 5
Feb 18 21:38:28.184: INFO: Pod name wrapped-volume-race-79f7a2f7-b475-48f8-8e50-ce8a9ad83f51: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-79f7a2f7-b475-48f8-8e50-ce8a9ad83f51 in namespace emptydir-wrapper-5417, will wait for the garbage collector to delete the pods
Feb 18 21:38:58.496: INFO: Deleting ReplicationController wrapped-volume-race-79f7a2f7-b475-48f8-8e50-ce8a9ad83f51 took: 68.055006ms
Feb 18 21:38:59.097: INFO: Terminating ReplicationController wrapped-volume-race-79f7a2f7-b475-48f8-8e50-ce8a9ad83f51 pods took: 601.095305ms
STEP: Creating RC which spawns configmap-volume pods
Feb 18 21:39:23.374: INFO: Pod name wrapped-volume-race-d7916b6a-9ba9-4977-a935-ea42aed63976: Found 0 pods out of 5
Feb 18 21:39:28.391: INFO: Pod name wrapped-volume-race-d7916b6a-9ba9-4977-a935-ea42aed63976: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d7916b6a-9ba9-4977-a935-ea42aed63976 in namespace emptydir-wrapper-5417, will wait for the garbage collector to delete the pods
Feb 18 21:39:54.640: INFO: Deleting ReplicationController wrapped-volume-race-d7916b6a-9ba9-4977-a935-ea42aed63976 took: 12.479682ms
Feb 18 21:39:55.141: INFO: Terminating ReplicationController wrapped-volume-race-d7916b6a-9ba9-4977-a935-ea42aed63976 pods took: 501.022334ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:40:13.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5417" for this suite.

• [SLOW TEST:171.154 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":79,"skipped":1303,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:40:13.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-7276eac0-b939-4b07-ac52-eb43adbda2cf in namespace container-probe-5500
Feb 18 21:40:28.507: INFO: Started pod liveness-7276eac0-b939-4b07-ac52-eb43adbda2cf in namespace container-probe-5500
STEP: checking the pod's current state and verifying that restartCount is present
Feb 18 21:40:28.512: INFO: Initial restart count of pod liveness-7276eac0-b939-4b07-ac52-eb43adbda2cf is 0
Feb 18 21:40:54.784: INFO: Restart count of pod container-probe-5500/liveness-7276eac0-b939-4b07-ac52-eb43adbda2cf is now 1 (26.271847921s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:40:54.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5500" for this suite.

• [SLOW TEST:41.137 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1334,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
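Editor's note: the probe test above watches `restartCount` climb from 0 to 1 after the container's `/healthz` endpoint starts failing and the kubelet restarts it. A minimal hypothetical sketch of that probe configuration; the image, port, and timing values are illustrative assumptions, not taken from this log:

```yaml
# Hypothetical sketch (image/port/timings assumed): an HTTP liveness
# probe against /healthz. When the endpoint fails, the kubelet
# restarts the container, bumping restartCount as seen in the log.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo         # illustrative name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.8   # assumed test image
    args: ["liveness"]             # assumed: serves /healthz, then fails it
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080                 # assumed port
      initialDelaySeconds: 15
      failureThreshold: 1
```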
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:40:54.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 21:40:55.721: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 21:40:57.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:40:59.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:41:01.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:41:03.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717658855, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 21:41:06.799: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:41:07.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1587" for this suite.
STEP: Destroying namespace "webhook-1587-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.542 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":81,"skipped":1360,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:41:07.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-e8556715-4702-4349-af32-2769cf9ed42c in namespace container-probe-8348
Feb 18 21:41:15.577: INFO: Started pod test-webserver-e8556715-4702-4349-af32-2769cf9ed42c in namespace container-probe-8348
STEP: checking the pod's current state and verifying that restartCount is present
Feb 18 21:41:15.585: INFO: Initial restart count of pod test-webserver-e8556715-4702-4349-af32-2769cf9ed42c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:45:16.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8348" for this suite.

• [SLOW TEST:249.500 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1385,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:45:16.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 21:45:17.078: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45a5789d-f7f0-406b-8734-6b87a7a5ec21" in namespace "downward-api-8208" to be "success or failure"
Feb 18 21:45:17.087: INFO: Pod "downwardapi-volume-45a5789d-f7f0-406b-8734-6b87a7a5ec21": Phase="Pending", Reason="", readiness=false. Elapsed: 9.006917ms
Feb 18 21:45:19.093: INFO: Pod "downwardapi-volume-45a5789d-f7f0-406b-8734-6b87a7a5ec21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014827186s
Feb 18 21:45:21.099: INFO: Pod "downwardapi-volume-45a5789d-f7f0-406b-8734-6b87a7a5ec21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020220165s
Feb 18 21:45:23.111: INFO: Pod "downwardapi-volume-45a5789d-f7f0-406b-8734-6b87a7a5ec21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032982665s
Feb 18 21:45:25.116: INFO: Pod "downwardapi-volume-45a5789d-f7f0-406b-8734-6b87a7a5ec21": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037881459s
Feb 18 21:45:27.124: INFO: Pod "downwardapi-volume-45a5789d-f7f0-406b-8734-6b87a7a5ec21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.046048226s
STEP: Saw pod success
Feb 18 21:45:27.125: INFO: Pod "downwardapi-volume-45a5789d-f7f0-406b-8734-6b87a7a5ec21" satisfied condition "success or failure"
Feb 18 21:45:27.132: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-45a5789d-f7f0-406b-8734-6b87a7a5ec21 container client-container: 
STEP: delete the pod
Feb 18 21:45:27.579: INFO: Waiting for pod downwardapi-volume-45a5789d-f7f0-406b-8734-6b87a7a5ec21 to disappear
Feb 18 21:45:27.585: INFO: Pod downwardapi-volume-45a5789d-f7f0-406b-8734-6b87a7a5ec21 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:45:27.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8208" for this suite.

• [SLOW TEST:10.650 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1389,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:45:27.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 18 21:45:27.726: INFO: Waiting up to 5m0s for pod "pod-b8b021d8-3419-45ce-ae82-32a64d6fa6d5" in namespace "emptydir-2493" to be "success or failure"
Feb 18 21:45:27.750: INFO: Pod "pod-b8b021d8-3419-45ce-ae82-32a64d6fa6d5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.857425ms
Feb 18 21:45:29.763: INFO: Pod "pod-b8b021d8-3419-45ce-ae82-32a64d6fa6d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037454556s
Feb 18 21:45:31.771: INFO: Pod "pod-b8b021d8-3419-45ce-ae82-32a64d6fa6d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045070114s
Feb 18 21:45:33.788: INFO: Pod "pod-b8b021d8-3419-45ce-ae82-32a64d6fa6d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06170947s
Feb 18 21:45:35.809: INFO: Pod "pod-b8b021d8-3419-45ce-ae82-32a64d6fa6d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082940061s
Feb 18 21:45:37.817: INFO: Pod "pod-b8b021d8-3419-45ce-ae82-32a64d6fa6d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090811654s
STEP: Saw pod success
Feb 18 21:45:37.817: INFO: Pod "pod-b8b021d8-3419-45ce-ae82-32a64d6fa6d5" satisfied condition "success or failure"
Feb 18 21:45:37.824: INFO: Trying to get logs from node jerma-node pod pod-b8b021d8-3419-45ce-ae82-32a64d6fa6d5 container test-container: 
STEP: delete the pod
Feb 18 21:45:37.917: INFO: Waiting for pod pod-b8b021d8-3419-45ce-ae82-32a64d6fa6d5 to disappear
Feb 18 21:45:37.928: INFO: Pod pod-b8b021d8-3419-45ce-ae82-32a64d6fa6d5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:45:37.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2493" for this suite.

• [SLOW TEST:10.335 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1408,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:45:37.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:45:38.006: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:45:38.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9263" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":85,"skipped":1460,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:45:38.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 18 21:45:39.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4146'
Feb 18 21:45:41.363: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 21:45:41.363: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Feb 18 21:45:41.401: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 18 21:45:41.429: INFO: scanned /root for discovery docs: 
Feb 18 21:45:41.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4146'
Feb 18 21:46:03.577: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 18 21:46:03.577: INFO: stdout: "Created e2e-test-httpd-rc-62db118862e2a3a6ad5e74de05320a24\nScaling up e2e-test-httpd-rc-62db118862e2a3a6ad5e74de05320a24 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-62db118862e2a3a6ad5e74de05320a24 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-62db118862e2a3a6ad5e74de05320a24 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Feb 18 21:46:03.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4146'
Feb 18 21:46:03.817: INFO: stderr: ""
Feb 18 21:46:03.817: INFO: stdout: "e2e-test-httpd-rc-62db118862e2a3a6ad5e74de05320a24-vdg6c "
Feb 18 21:46:03.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-62db118862e2a3a6ad5e74de05320a24-vdg6c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4146'
Feb 18 21:46:03.953: INFO: stderr: ""
Feb 18 21:46:03.954: INFO: stdout: "true"
Feb 18 21:46:03.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-62db118862e2a3a6ad5e74de05320a24-vdg6c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4146'
Feb 18 21:46:04.111: INFO: stderr: ""
Feb 18 21:46:04.111: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Feb 18 21:46:04.111: INFO: e2e-test-httpd-rc-62db118862e2a3a6ad5e74de05320a24-vdg6c is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Feb 18 21:46:04.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4146'
Feb 18 21:46:04.256: INFO: stderr: ""
Feb 18 21:46:04.256: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:46:04.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4146" for this suite.

• [SLOW TEST:25.561 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":86,"skipped":1465,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:46:04.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 18 21:46:04.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5176'
Feb 18 21:46:04.552: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 21:46:04.553: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718
Feb 18 21:46:08.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5176'
Feb 18 21:46:08.770: INFO: stderr: ""
Feb 18 21:46:08.770: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:46:08.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5176" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":87,"skipped":1476,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:46:08.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:46:17.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3085" for this suite.

• [SLOW TEST:8.543 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":88,"skipped":1481,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:46:17.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-d07d3626-587b-4d57-b01c-d5304cc498b5
STEP: Creating a pod to test consume secrets
Feb 18 21:46:17.612: INFO: Waiting up to 5m0s for pod "pod-secrets-7e481fba-b526-43f4-914f-6e66e54503a5" in namespace "secrets-6798" to be "success or failure"
Feb 18 21:46:17.616: INFO: Pod "pod-secrets-7e481fba-b526-43f4-914f-6e66e54503a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.318099ms
Feb 18 21:46:19.621: INFO: Pod "pod-secrets-7e481fba-b526-43f4-914f-6e66e54503a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009001143s
Feb 18 21:46:21.641: INFO: Pod "pod-secrets-7e481fba-b526-43f4-914f-6e66e54503a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028949316s
Feb 18 21:46:23.652: INFO: Pod "pod-secrets-7e481fba-b526-43f4-914f-6e66e54503a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039681398s
Feb 18 21:46:25.659: INFO: Pod "pod-secrets-7e481fba-b526-43f4-914f-6e66e54503a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046437279s
Feb 18 21:46:27.724: INFO: Pod "pod-secrets-7e481fba-b526-43f4-914f-6e66e54503a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111776047s
STEP: Saw pod success
Feb 18 21:46:27.724: INFO: Pod "pod-secrets-7e481fba-b526-43f4-914f-6e66e54503a5" satisfied condition "success or failure"
Feb 18 21:46:27.730: INFO: Trying to get logs from node jerma-node pod pod-secrets-7e481fba-b526-43f4-914f-6e66e54503a5 container secret-env-test: 
STEP: delete the pod
Feb 18 21:46:27.777: INFO: Waiting for pod pod-secrets-7e481fba-b526-43f4-914f-6e66e54503a5 to disappear
Feb 18 21:46:27.938: INFO: Pod pod-secrets-7e481fba-b526-43f4-914f-6e66e54503a5 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:46:27.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6798" for this suite.

• [SLOW TEST:10.493 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1492,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:46:27.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 21:46:29.633: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 21:46:31.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:46:33.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:46:35.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659189, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 21:46:38.692: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Feb 18 21:46:46.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-9109 to-be-attached-pod -i -c=container1'
Feb 18 21:46:47.003: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:46:47.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9109" for this suite.
STEP: Destroying namespace "webhook-9109-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.241 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":90,"skipped":1508,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:46:47.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-8bc98291-d62e-4ee6-b29d-ce75138918cd
STEP: Creating a pod to test consume secrets
Feb 18 21:46:47.400: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2" in namespace "projected-1810" to be "success or failure"
Feb 18 21:46:47.405: INFO: Pod "pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.061587ms
Feb 18 21:46:49.412: INFO: Pod "pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012391722s
Feb 18 21:46:51.420: INFO: Pod "pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020425977s
Feb 18 21:46:53.460: INFO: Pod "pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059856846s
Feb 18 21:46:55.466: INFO: Pod "pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065884637s
Feb 18 21:46:57.489: INFO: Pod "pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.089424535s
Feb 18 21:47:00.073: INFO: Pod "pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.673682372s
STEP: Saw pod success
Feb 18 21:47:00.074: INFO: Pod "pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2" satisfied condition "success or failure"
Feb 18 21:47:00.081: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2 container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 21:47:00.449: INFO: Waiting for pod pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2 to disappear
Feb 18 21:47:00.458: INFO: Pod pod-projected-secrets-06e80da4-c2ef-409f-9931-4fa4dcee51c2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:47:00.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1810" for this suite.

• [SLOW TEST:13.260 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1549,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:47:00.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:47:00.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Feb 18 21:47:01.249: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T21:47:01Z generation:1 name:name1 resourceVersion:9268526 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6ca6f351-26a5-475a-b4b2-fbb4a991be4b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Feb 18 21:47:11.259: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T21:47:11Z generation:1 name:name2 resourceVersion:9268564 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:206cafef-950a-4456-8ad3-58961d2de67b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Feb 18 21:47:21.271: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T21:47:01Z generation:2 name:name1 resourceVersion:9268590 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6ca6f351-26a5-475a-b4b2-fbb4a991be4b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Feb 18 21:47:32.546: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T21:47:11Z generation:2 name:name2 resourceVersion:9268610 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:206cafef-950a-4456-8ad3-58961d2de67b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Feb 18 21:47:42.577: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T21:47:01Z generation:2 name:name1 resourceVersion:9268635 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6ca6f351-26a5-475a-b4b2-fbb4a991be4b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Feb 18 21:47:52.594: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T21:47:11Z generation:2 name:name2 resourceVersion:9268659 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:206cafef-950a-4456-8ad3-58961d2de67b] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:48:03.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2774" for this suite.

• [SLOW TEST:62.647 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":92,"skipped":1560,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:48:03.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Feb 18 21:48:03.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-19 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 18 21:48:11.040: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0218 21:48:09.756229    1338 log.go:172] (0xc000a306e0) (0xc0007cc0a0) Create stream\nI0218 21:48:09.756471    1338 log.go:172] (0xc000a306e0) (0xc0007cc0a0) Stream added, broadcasting: 1\nI0218 21:48:09.761145    1338 log.go:172] (0xc000a306e0) Reply frame received for 1\nI0218 21:48:09.761182    1338 log.go:172] (0xc000a306e0) (0xc000a08140) Create stream\nI0218 21:48:09.761199    1338 log.go:172] (0xc000a306e0) (0xc000a08140) Stream added, broadcasting: 3\nI0218 21:48:09.763007    1338 log.go:172] (0xc000a306e0) Reply frame received for 3\nI0218 21:48:09.763054    1338 log.go:172] (0xc000a306e0) (0xc0007cc140) Create stream\nI0218 21:48:09.763072    1338 log.go:172] (0xc000a306e0) (0xc0007cc140) Stream added, broadcasting: 5\nI0218 21:48:09.765016    1338 log.go:172] (0xc000a306e0) Reply frame received for 5\nI0218 21:48:09.765097    1338 log.go:172] (0xc000a306e0) (0xc000814000) Create stream\nI0218 21:48:09.765107    1338 log.go:172] (0xc000a306e0) (0xc000814000) Stream added, broadcasting: 7\nI0218 21:48:09.767563    1338 log.go:172] (0xc000a306e0) Reply frame received for 7\nI0218 21:48:09.768162    1338 log.go:172] (0xc000a08140) (3) Writing data frame\nI0218 21:48:09.768333    1338 log.go:172] (0xc000a08140) (3) Writing data frame\nI0218 21:48:09.775679    1338 log.go:172] (0xc000a306e0) Data frame received for 5\nI0218 21:48:09.775761    1338 log.go:172] (0xc0007cc140) (5) Data frame handling\nI0218 21:48:09.775783    1338 log.go:172] (0xc0007cc140) (5) Data frame sent\nI0218 21:48:09.793039    1338 log.go:172] (0xc000a306e0) Data frame received for 5\nI0218 21:48:09.793105    1338 log.go:172] (0xc0007cc140) (5) Data frame handling\nI0218 21:48:09.793123    1338 log.go:172] (0xc0007cc140) (5) Data frame sent\nI0218 21:48:10.972112    1338 log.go:172] (0xc000a306e0) Data frame received for 1\nI0218 21:48:10.972291    1338 log.go:172] (0xc000a306e0) (0xc000814000) Stream removed, broadcasting: 7\nI0218 21:48:10.972342    1338 log.go:172] (0xc0007cc0a0) (1) Data frame handling\nI0218 21:48:10.972380    1338 log.go:172] (0xc000a306e0) (0xc0007cc140) Stream removed, broadcasting: 5\nI0218 21:48:10.972422    1338 log.go:172] (0xc0007cc0a0) (1) Data frame sent\nI0218 21:48:10.972435    1338 log.go:172] (0xc000a306e0) (0xc000a08140) Stream removed, broadcasting: 3\nI0218 21:48:10.972509    1338 log.go:172] (0xc000a306e0) (0xc0007cc0a0) Stream removed, broadcasting: 1\nI0218 21:48:10.972536    1338 log.go:172] (0xc000a306e0) Go away received\nI0218 21:48:10.973660    1338 log.go:172] (0xc000a306e0) (0xc0007cc0a0) Stream removed, broadcasting: 1\nI0218 21:48:10.973692    1338 log.go:172] (0xc000a306e0) (0xc000a08140) Stream removed, broadcasting: 3\nI0218 21:48:10.973712    1338 log.go:172] (0xc000a306e0) (0xc0007cc140) Stream removed, broadcasting: 5\nI0218 21:48:10.973736    1338 log.go:172] (0xc000a306e0) (0xc000814000) Stream removed, broadcasting: 7\n"
Feb 18 21:48:11.040: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:48:13.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-19" for this suite.

• [SLOW TEST:9.928 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":93,"skipped":1560,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:48:13.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 18 21:48:13.132: INFO: Waiting up to 5m0s for pod "pod-581b9523-d2a8-431d-b945-bc5eff6d3ef4" in namespace "emptydir-6421" to be "success or failure"
Feb 18 21:48:13.230: INFO: Pod "pod-581b9523-d2a8-431d-b945-bc5eff6d3ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 98.140812ms
Feb 18 21:48:15.242: INFO: Pod "pod-581b9523-d2a8-431d-b945-bc5eff6d3ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109759818s
Feb 18 21:48:17.250: INFO: Pod "pod-581b9523-d2a8-431d-b945-bc5eff6d3ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117528824s
Feb 18 21:48:19.257: INFO: Pod "pod-581b9523-d2a8-431d-b945-bc5eff6d3ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125040381s
Feb 18 21:48:21.267: INFO: Pod "pod-581b9523-d2a8-431d-b945-bc5eff6d3ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134838656s
Feb 18 21:48:23.286: INFO: Pod "pod-581b9523-d2a8-431d-b945-bc5eff6d3ef4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.154289547s
STEP: Saw pod success
Feb 18 21:48:23.286: INFO: Pod "pod-581b9523-d2a8-431d-b945-bc5eff6d3ef4" satisfied condition "success or failure"
Feb 18 21:48:23.292: INFO: Trying to get logs from node jerma-node pod pod-581b9523-d2a8-431d-b945-bc5eff6d3ef4 container test-container: 
STEP: delete the pod
Feb 18 21:48:23.357: INFO: Waiting for pod pod-581b9523-d2a8-431d-b945-bc5eff6d3ef4 to disappear
Feb 18 21:48:23.363: INFO: Pod pod-581b9523-d2a8-431d-b945-bc5eff6d3ef4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:48:23.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6421" for this suite.

• [SLOW TEST:10.329 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1562,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:48:23.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 18 21:48:23.900: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 18 21:48:25.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:48:27.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:48:29.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659303, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 21:48:32.997: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:48:33.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:48:34.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3823" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.331 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":95,"skipped":1568,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:48:34.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-bd1c1922-2f84-4724-9d76-f68017789a1b
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-bd1c1922-2f84-4724-9d76-f68017789a1b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:49:56.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8499" for this suite.

• [SLOW TEST:81.535 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1585,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:49:56.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Feb 18 21:49:56.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Feb 18 21:50:09.977: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 21:50:12.985: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:50:26.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2450" for this suite.

• [SLOW TEST:30.265 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":97,"skipped":1603,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:50:26.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:50:26.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 18 21:50:29.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9609 create -f -'
Feb 18 21:50:32.417: INFO: stderr: ""
Feb 18 21:50:32.417: INFO: stdout: "e2e-test-crd-publish-openapi-3755-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 18 21:50:32.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9609 delete e2e-test-crd-publish-openapi-3755-crds test-cr'
Feb 18 21:50:32.589: INFO: stderr: ""
Feb 18 21:50:32.590: INFO: stdout: "e2e-test-crd-publish-openapi-3755-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Feb 18 21:50:32.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9609 apply -f -'
Feb 18 21:50:33.084: INFO: stderr: ""
Feb 18 21:50:33.084: INFO: stdout: "e2e-test-crd-publish-openapi-3755-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 18 21:50:33.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9609 delete e2e-test-crd-publish-openapi-3755-crds test-cr'
Feb 18 21:50:33.400: INFO: stderr: ""
Feb 18 21:50:33.400: INFO: stdout: "e2e-test-crd-publish-openapi-3755-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Feb 18 21:50:33.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3755-crds'
Feb 18 21:50:33.688: INFO: stderr: ""
Feb 18 21:50:33.688: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3755-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:50:35.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9609" for this suite.

• [SLOW TEST:9.157 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":98,"skipped":1613,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:50:35.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 21:50:35.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d2d5e3d-afe5-4744-8a3d-16fa7dca0533" in namespace "downward-api-6040" to be "success or failure"
Feb 18 21:50:35.809: INFO: Pod "downwardapi-volume-4d2d5e3d-afe5-4744-8a3d-16fa7dca0533": Phase="Pending", Reason="", readiness=false. Elapsed: 7.431391ms
Feb 18 21:50:37.821: INFO: Pod "downwardapi-volume-4d2d5e3d-afe5-4744-8a3d-16fa7dca0533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019283108s
Feb 18 21:50:39.859: INFO: Pod "downwardapi-volume-4d2d5e3d-afe5-4744-8a3d-16fa7dca0533": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057056708s
Feb 18 21:50:41.869: INFO: Pod "downwardapi-volume-4d2d5e3d-afe5-4744-8a3d-16fa7dca0533": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066942804s
Feb 18 21:50:43.876: INFO: Pod "downwardapi-volume-4d2d5e3d-afe5-4744-8a3d-16fa7dca0533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074487005s
STEP: Saw pod success
Feb 18 21:50:43.876: INFO: Pod "downwardapi-volume-4d2d5e3d-afe5-4744-8a3d-16fa7dca0533" satisfied condition "success or failure"
Feb 18 21:50:43.880: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4d2d5e3d-afe5-4744-8a3d-16fa7dca0533 container client-container: 
STEP: delete the pod
Feb 18 21:50:43.972: INFO: Waiting for pod downwardapi-volume-4d2d5e3d-afe5-4744-8a3d-16fa7dca0533 to disappear
Feb 18 21:50:43.980: INFO: Pod downwardapi-volume-4d2d5e3d-afe5-4744-8a3d-16fa7dca0533 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:50:43.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6040" for this suite.

• [SLOW TEST:8.311 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1616,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:50:43.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5272
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Feb 18 21:50:44.285: INFO: Found 0 stateful pods, waiting for 3
Feb 18 21:50:54.373: INFO: Found 2 stateful pods, waiting for 3
Feb 18 21:51:04.293: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 21:51:04.293: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 21:51:04.293: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 18 21:51:14.294: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 21:51:14.294: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 21:51:14.294: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 18 21:51:14.331: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 18 21:51:24.434: INFO: Updating stateful set ss2
Feb 18 21:51:24.446: INFO: Waiting for Pod statefulset-5272/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Feb 18 21:51:34.717: INFO: Found 2 stateful pods, waiting for 3
Feb 18 21:51:44.724: INFO: Found 2 stateful pods, waiting for 3
Feb 18 21:51:54.725: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 21:51:54.725: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 21:51:54.725: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 18 21:51:54.752: INFO: Updating stateful set ss2
Feb 18 21:51:54.804: INFO: Waiting for Pod statefulset-5272/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 21:52:05.487: INFO: Updating stateful set ss2
Feb 18 21:52:05.531: INFO: Waiting for StatefulSet statefulset-5272/ss2 to complete update
Feb 18 21:52:05.531: INFO: Waiting for Pod statefulset-5272/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 21:52:15.548: INFO: Waiting for StatefulSet statefulset-5272/ss2 to complete update
Feb 18 21:52:15.548: INFO: Waiting for Pod statefulset-5272/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 21:52:25.556: INFO: Waiting for StatefulSet statefulset-5272/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb 18 21:52:35.567: INFO: Deleting all statefulset in ns statefulset-5272
Feb 18 21:52:35.572: INFO: Scaling statefulset ss2 to 0
Feb 18 21:53:05.601: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 21:53:05.610: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:53:05.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5272" for this suite.

• [SLOW TEST:141.660 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":100,"skipped":1623,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
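The canary and phased rolling update exercised by the test above is driven by the StatefulSet `updateStrategy.rollingUpdate.partition` field. A minimal sketch of the kind of spec involved (the image tags match the log; the selector labels and service name are assumptions):

```yaml
# Sketch only: partition-based canary update, as in the test above.
# Labels and serviceName are placeholders, not taken from the log.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only pods with ordinal >= 2 get the new revision (the canary)
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.39-alpine
```

Lowering `partition` step by step (3 → 2 → 1 → 0) produces the phased rollout seen in the log, with the controller updating pods from the highest ordinal down and waiting for each to become Ready.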
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:53:05.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 21:53:06.430: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 21:53:08.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:53:10.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:53:12.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 21:53:15.552: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:53:15.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7120" for this suite.
STEP: Destroying namespace "webhook-7120-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.365 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":101,"skipped":1634,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
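The webhook test above registers a mutating pod webhook via the AdmissionRegistration API. A sketch of what such a registration looks like (the service name and namespace follow the log; the webhook name, path, and CA bundle are placeholders):

```yaml
# Sketch only: a MutatingWebhookConfiguration of the kind the test registers.
# clientConfig.service points at the e2e-test-webhook service the log pairs
# with its endpoint; path and caBundle here are assumed placeholders.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-pods-example
webhooks:
- name: mutate-pods.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-7120
      path: /mutating-pods
    caBundle: <base64-encoded CA certificate>
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```

With this in place, any pod CREATE in scope is sent to the webhook service, whose JSON-patch response is what "apply defaults after mutation" verifies.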
SSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:53:16.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9997, will wait for the garbage collector to delete the pods
Feb 18 21:53:32.350: INFO: Deleting Job.batch foo took: 26.569216ms
Feb 18 21:53:32.650: INFO: Terminating Job.batch foo pods took: 300.590238ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:54:12.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9997" for this suite.

• [SLOW TEST:56.456 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":102,"skipped":1641,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
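The Job test above creates a parallel Job named `foo`, checks that the number of active pods equals the parallelism, then deletes the Job and waits for the garbage collector to remove the pods. A minimal sketch of such a Job (parallelism, image, and command are assumptions, not read from the log):

```yaml
# Sketch only: a parallel Job like the test's Job.batch "foo".
# The container image and sleep duration are illustrative assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
```

Deleting the Job (`kubectl delete job foo`) leaves pod cleanup to the garbage collector via owner references, which is the cascading deletion the "Ensuring job was deleted" step waits on.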
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:54:12.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2552
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2552
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2552
Feb 18 21:54:12.619: INFO: Found 0 stateful pods, waiting for 1
Feb 18 21:54:22.628: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 18 21:54:22.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2552 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 21:54:23.116: INFO: stderr: "I0218 21:54:22.827410    1476 log.go:172] (0xc0009a20b0) (0xc0006f9ae0) Create stream\nI0218 21:54:22.827562    1476 log.go:172] (0xc0009a20b0) (0xc0006f9ae0) Stream added, broadcasting: 1\nI0218 21:54:22.832374    1476 log.go:172] (0xc0009a20b0) Reply frame received for 1\nI0218 21:54:22.832432    1476 log.go:172] (0xc0009a20b0) (0xc000994000) Create stream\nI0218 21:54:22.832449    1476 log.go:172] (0xc0009a20b0) (0xc000994000) Stream added, broadcasting: 3\nI0218 21:54:22.834254    1476 log.go:172] (0xc0009a20b0) Reply frame received for 3\nI0218 21:54:22.834283    1476 log.go:172] (0xc0009a20b0) (0xc0006f9cc0) Create stream\nI0218 21:54:22.834293    1476 log.go:172] (0xc0009a20b0) (0xc0006f9cc0) Stream added, broadcasting: 5\nI0218 21:54:22.838272    1476 log.go:172] (0xc0009a20b0) Reply frame received for 5\nI0218 21:54:22.953075    1476 log.go:172] (0xc0009a20b0) Data frame received for 5\nI0218 21:54:22.953171    1476 log.go:172] (0xc0006f9cc0) (5) Data frame handling\nI0218 21:54:22.953198    1476 log.go:172] (0xc0006f9cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 21:54:22.976511    1476 log.go:172] (0xc0009a20b0) Data frame received for 3\nI0218 21:54:22.976558    1476 log.go:172] (0xc000994000) (3) Data frame handling\nI0218 21:54:22.976573    1476 log.go:172] (0xc000994000) (3) Data frame sent\nI0218 21:54:23.108457    1476 log.go:172] (0xc0009a20b0) Data frame received for 1\nI0218 21:54:23.108635    1476 log.go:172] (0xc0006f9ae0) (1) Data frame handling\nI0218 21:54:23.108660    1476 log.go:172] (0xc0006f9ae0) (1) Data frame sent\nI0218 21:54:23.108677    1476 log.go:172] (0xc0009a20b0) (0xc0006f9ae0) Stream removed, broadcasting: 1\nI0218 21:54:23.109176    1476 log.go:172] (0xc0009a20b0) (0xc000994000) Stream removed, broadcasting: 3\nI0218 21:54:23.109220    1476 log.go:172] (0xc0009a20b0) (0xc0006f9cc0) Stream removed, broadcasting: 5\nI0218 21:54:23.109241    1476 
log.go:172] (0xc0009a20b0) (0xc0006f9ae0) Stream removed, broadcasting: 1\nI0218 21:54:23.109249    1476 log.go:172] (0xc0009a20b0) (0xc000994000) Stream removed, broadcasting: 3\nI0218 21:54:23.109257    1476 log.go:172] (0xc0009a20b0) (0xc0006f9cc0) Stream removed, broadcasting: 5\nI0218 21:54:23.109353    1476 log.go:172] (0xc0009a20b0) Go away received\n"
Feb 18 21:54:23.116: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 21:54:23.116: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 18 21:54:23.160: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 18 21:54:23.160: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 21:54:23.197: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999826s
Feb 18 21:54:24.204: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.983649116s
Feb 18 21:54:25.212: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.97704356s
Feb 18 21:54:26.219: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.96960663s
Feb 18 21:54:27.224: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.962507462s
Feb 18 21:54:28.233: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.957324669s
Feb 18 21:54:29.240: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.948198574s
Feb 18 21:54:30.248: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.940833831s
Feb 18 21:54:31.256: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.933750509s
Feb 18 21:54:32.269: INFO: Verifying statefulset ss doesn't scale past 1 for another 925.597956ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2552
Feb 18 21:54:33.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2552 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 21:54:33.775: INFO: stderr: "I0218 21:54:33.503514    1498 log.go:172] (0xc0007249a0) (0xc00094a1e0) Create stream\nI0218 21:54:33.503704    1498 log.go:172] (0xc0007249a0) (0xc00094a1e0) Stream added, broadcasting: 1\nI0218 21:54:33.507533    1498 log.go:172] (0xc0007249a0) Reply frame received for 1\nI0218 21:54:33.507580    1498 log.go:172] (0xc0007249a0) (0xc00094a280) Create stream\nI0218 21:54:33.507598    1498 log.go:172] (0xc0007249a0) (0xc00094a280) Stream added, broadcasting: 3\nI0218 21:54:33.509332    1498 log.go:172] (0xc0007249a0) Reply frame received for 3\nI0218 21:54:33.509397    1498 log.go:172] (0xc0007249a0) (0xc00094a320) Create stream\nI0218 21:54:33.509411    1498 log.go:172] (0xc0007249a0) (0xc00094a320) Stream added, broadcasting: 5\nI0218 21:54:33.511601    1498 log.go:172] (0xc0007249a0) Reply frame received for 5\nI0218 21:54:33.623721    1498 log.go:172] (0xc0007249a0) Data frame received for 5\nI0218 21:54:33.624067    1498 log.go:172] (0xc00094a320) (5) Data frame handling\nI0218 21:54:33.624162    1498 log.go:172] (0xc00094a320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 21:54:33.624341    1498 log.go:172] (0xc0007249a0) Data frame received for 3\nI0218 21:54:33.624372    1498 log.go:172] (0xc00094a280) (3) Data frame handling\nI0218 21:54:33.624413    1498 log.go:172] (0xc00094a280) (3) Data frame sent\nI0218 21:54:33.750373    1498 log.go:172] (0xc0007249a0) (0xc00094a280) Stream removed, broadcasting: 3\nI0218 21:54:33.750588    1498 log.go:172] (0xc0007249a0) Data frame received for 1\nI0218 21:54:33.750657    1498 log.go:172] (0xc0007249a0) (0xc00094a320) Stream removed, broadcasting: 5\nI0218 21:54:33.750730    1498 log.go:172] (0xc00094a1e0) (1) Data frame handling\nI0218 21:54:33.750772    1498 log.go:172] (0xc00094a1e0) (1) Data frame sent\nI0218 21:54:33.750793    1498 log.go:172] (0xc0007249a0) (0xc00094a1e0) Stream removed, broadcasting: 1\nI0218 21:54:33.750819    1498 
log.go:172] (0xc0007249a0) Go away received\nI0218 21:54:33.753175    1498 log.go:172] (0xc0007249a0) (0xc00094a1e0) Stream removed, broadcasting: 1\nI0218 21:54:33.753214    1498 log.go:172] (0xc0007249a0) (0xc00094a280) Stream removed, broadcasting: 3\nI0218 21:54:33.753236    1498 log.go:172] (0xc0007249a0) (0xc00094a320) Stream removed, broadcasting: 5\n"
Feb 18 21:54:33.775: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 18 21:54:33.775: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 18 21:54:33.787: INFO: Found 1 stateful pods, waiting for 3
Feb 18 21:54:43.797: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 21:54:43.798: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 21:54:43.798: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 18 21:54:53.796: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 21:54:53.796: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 21:54:53.796: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 18 21:54:53.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2552 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 21:54:54.248: INFO: stderr: "I0218 21:54:54.036873    1519 log.go:172] (0xc00059a6e0) (0xc000737400) Create stream\nI0218 21:54:54.037380    1519 log.go:172] (0xc00059a6e0) (0xc000737400) Stream added, broadcasting: 1\nI0218 21:54:54.043875    1519 log.go:172] (0xc00059a6e0) Reply frame received for 1\nI0218 21:54:54.044407    1519 log.go:172] (0xc00059a6e0) (0xc000546000) Create stream\nI0218 21:54:54.044488    1519 log.go:172] (0xc00059a6e0) (0xc000546000) Stream added, broadcasting: 3\nI0218 21:54:54.048017    1519 log.go:172] (0xc00059a6e0) Reply frame received for 3\nI0218 21:54:54.048046    1519 log.go:172] (0xc00059a6e0) (0xc0006899a0) Create stream\nI0218 21:54:54.048054    1519 log.go:172] (0xc00059a6e0) (0xc0006899a0) Stream added, broadcasting: 5\nI0218 21:54:54.050059    1519 log.go:172] (0xc00059a6e0) Reply frame received for 5\nI0218 21:54:54.140526    1519 log.go:172] (0xc00059a6e0) Data frame received for 3\nI0218 21:54:54.140663    1519 log.go:172] (0xc000546000) (3) Data frame handling\nI0218 21:54:54.140685    1519 log.go:172] (0xc000546000) (3) Data frame sent\nI0218 21:54:54.140721    1519 log.go:172] (0xc00059a6e0) Data frame received for 5\nI0218 21:54:54.140725    1519 log.go:172] (0xc0006899a0) (5) Data frame handling\nI0218 21:54:54.140734    1519 log.go:172] (0xc0006899a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 21:54:54.237881    1519 log.go:172] (0xc00059a6e0) Data frame received for 1\nI0218 21:54:54.238018    1519 log.go:172] (0xc00059a6e0) (0xc000546000) Stream removed, broadcasting: 3\nI0218 21:54:54.238114    1519 log.go:172] (0xc000737400) (1) Data frame handling\nI0218 21:54:54.238149    1519 log.go:172] (0xc000737400) (1) Data frame sent\nI0218 21:54:54.238161    1519 log.go:172] (0xc00059a6e0) (0xc000737400) Stream removed, broadcasting: 1\nI0218 21:54:54.238189    1519 log.go:172] (0xc00059a6e0) (0xc0006899a0) Stream removed, broadcasting: 5\nI0218 21:54:54.238211    1519 
log.go:172] (0xc00059a6e0) Go away received\nI0218 21:54:54.239062    1519 log.go:172] (0xc00059a6e0) (0xc000737400) Stream removed, broadcasting: 1\nI0218 21:54:54.239080    1519 log.go:172] (0xc00059a6e0) (0xc000546000) Stream removed, broadcasting: 3\nI0218 21:54:54.239085    1519 log.go:172] (0xc00059a6e0) (0xc0006899a0) Stream removed, broadcasting: 5\n"
Feb 18 21:54:54.248: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 21:54:54.248: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 18 21:54:54.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2552 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 21:54:54.777: INFO: stderr: "I0218 21:54:54.404058    1538 log.go:172] (0xc0005f7130) (0xc000675ae0) Create stream\nI0218 21:54:54.404250    1538 log.go:172] (0xc0005f7130) (0xc000675ae0) Stream added, broadcasting: 1\nI0218 21:54:54.408114    1538 log.go:172] (0xc0005f7130) Reply frame received for 1\nI0218 21:54:54.408145    1538 log.go:172] (0xc0005f7130) (0xc0009c8000) Create stream\nI0218 21:54:54.408153    1538 log.go:172] (0xc0005f7130) (0xc0009c8000) Stream added, broadcasting: 3\nI0218 21:54:54.408890    1538 log.go:172] (0xc0005f7130) Reply frame received for 3\nI0218 21:54:54.408912    1538 log.go:172] (0xc0005f7130) (0xc00027a000) Create stream\nI0218 21:54:54.408918    1538 log.go:172] (0xc0005f7130) (0xc00027a000) Stream added, broadcasting: 5\nI0218 21:54:54.409752    1538 log.go:172] (0xc0005f7130) Reply frame received for 5\nI0218 21:54:54.518092    1538 log.go:172] (0xc0005f7130) Data frame received for 5\nI0218 21:54:54.518231    1538 log.go:172] (0xc00027a000) (5) Data frame handling\nI0218 21:54:54.518278    1538 log.go:172] (0xc00027a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 21:54:54.562026    1538 log.go:172] (0xc0005f7130) Data frame received for 3\nI0218 21:54:54.562237    1538 log.go:172] (0xc0009c8000) (3) Data frame handling\nI0218 21:54:54.562275    1538 log.go:172] (0xc0009c8000) (3) Data frame sent\nI0218 21:54:54.753042    1538 log.go:172] (0xc0005f7130) Data frame received for 1\nI0218 21:54:54.753199    1538 log.go:172] (0xc000675ae0) (1) Data frame handling\nI0218 21:54:54.753245    1538 log.go:172] (0xc000675ae0) (1) Data frame sent\nI0218 21:54:54.753304    1538 log.go:172] (0xc0005f7130) (0xc000675ae0) Stream removed, broadcasting: 1\nI0218 21:54:54.760956    1538 log.go:172] (0xc0005f7130) (0xc0009c8000) Stream removed, broadcasting: 3\nI0218 21:54:54.761343    1538 log.go:172] (0xc0005f7130) (0xc00027a000) Stream removed, broadcasting: 5\nI0218 21:54:54.761420    1538 
log.go:172] (0xc0005f7130) Go away received\nI0218 21:54:54.761480    1538 log.go:172] (0xc0005f7130) (0xc000675ae0) Stream removed, broadcasting: 1\nI0218 21:54:54.761508    1538 log.go:172] (0xc0005f7130) (0xc0009c8000) Stream removed, broadcasting: 3\nI0218 21:54:54.761523    1538 log.go:172] (0xc0005f7130) (0xc00027a000) Stream removed, broadcasting: 5\n"
Feb 18 21:54:54.777: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 21:54:54.777: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 18 21:54:54.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2552 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 21:54:55.242: INFO: stderr: "I0218 21:54:54.963802    1558 log.go:172] (0xc000906e70) (0xc000ac19a0) Create stream\nI0218 21:54:54.963947    1558 log.go:172] (0xc000906e70) (0xc000ac19a0) Stream added, broadcasting: 1\nI0218 21:54:54.967212    1558 log.go:172] (0xc000906e70) Reply frame received for 1\nI0218 21:54:54.967247    1558 log.go:172] (0xc000906e70) (0xc0009dc1e0) Create stream\nI0218 21:54:54.967257    1558 log.go:172] (0xc000906e70) (0xc0009dc1e0) Stream added, broadcasting: 3\nI0218 21:54:54.968919    1558 log.go:172] (0xc000906e70) Reply frame received for 3\nI0218 21:54:54.969001    1558 log.go:172] (0xc000906e70) (0xc000ac1a40) Create stream\nI0218 21:54:54.969017    1558 log.go:172] (0xc000906e70) (0xc000ac1a40) Stream added, broadcasting: 5\nI0218 21:54:54.970956    1558 log.go:172] (0xc000906e70) Reply frame received for 5\nI0218 21:54:55.063508    1558 log.go:172] (0xc000906e70) Data frame received for 5\nI0218 21:54:55.064221    1558 log.go:172] (0xc000ac1a40) (5) Data frame handling\nI0218 21:54:55.064433    1558 log.go:172] (0xc000ac1a40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 21:54:55.107220    1558 log.go:172] (0xc000906e70) Data frame received for 3\nI0218 21:54:55.107266    1558 log.go:172] (0xc0009dc1e0) (3) Data frame handling\nI0218 21:54:55.107284    1558 log.go:172] (0xc0009dc1e0) (3) Data frame sent\nI0218 21:54:55.226703    1558 log.go:172] (0xc000906e70) (0xc000ac1a40) Stream removed, broadcasting: 5\nI0218 21:54:55.226832    1558 log.go:172] (0xc000906e70) Data frame received for 1\nI0218 21:54:55.226860    1558 log.go:172] (0xc000906e70) (0xc0009dc1e0) Stream removed, broadcasting: 3\nI0218 21:54:55.226896    1558 log.go:172] (0xc000ac19a0) (1) Data frame handling\nI0218 21:54:55.226919    1558 log.go:172] (0xc000ac19a0) (1) Data frame sent\nI0218 21:54:55.226937    1558 log.go:172] (0xc000906e70) (0xc000ac19a0) Stream removed, broadcasting: 1\nI0218 21:54:55.226967    1558 
log.go:172] (0xc000906e70) Go away received\nI0218 21:54:55.227805    1558 log.go:172] (0xc000906e70) (0xc000ac19a0) Stream removed, broadcasting: 1\nI0218 21:54:55.227818    1558 log.go:172] (0xc000906e70) (0xc0009dc1e0) Stream removed, broadcasting: 3\nI0218 21:54:55.227826    1558 log.go:172] (0xc000906e70) (0xc000ac1a40) Stream removed, broadcasting: 5\n"
Feb 18 21:54:55.242: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 21:54:55.242: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 18 21:54:55.242: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 21:54:55.248: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 18 21:55:05.259: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 18 21:55:05.259: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 18 21:55:05.259: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 18 21:55:05.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999116s
Feb 18 21:55:06.297: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988171788s
Feb 18 21:55:07.311: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975028163s
Feb 18 21:55:08.324: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.96120158s
Feb 18 21:55:09.333: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.948405446s
Feb 18 21:55:10.346: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.939410361s
Feb 18 21:55:11.356: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.926595219s
Feb 18 21:55:12.367: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.916354021s
Feb 18 21:55:13.375: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.905660913s
Feb 18 21:55:14.555: INFO: Verifying statefulset ss doesn't scale past 3 for another 897.280504ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2552
Feb 18 21:55:15.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2552 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 21:55:15.942: INFO: stderr: "I0218 21:55:15.756044    1577 log.go:172] (0xc000bf42c0) (0xc000a38140) Create stream\nI0218 21:55:15.756145    1577 log.go:172] (0xc000bf42c0) (0xc000a38140) Stream added, broadcasting: 1\nI0218 21:55:15.759184    1577 log.go:172] (0xc000bf42c0) Reply frame received for 1\nI0218 21:55:15.759223    1577 log.go:172] (0xc000bf42c0) (0xc0009e4000) Create stream\nI0218 21:55:15.759231    1577 log.go:172] (0xc000bf42c0) (0xc0009e4000) Stream added, broadcasting: 3\nI0218 21:55:15.760778    1577 log.go:172] (0xc000bf42c0) Reply frame received for 3\nI0218 21:55:15.760856    1577 log.go:172] (0xc000bf42c0) (0xc0009e5040) Create stream\nI0218 21:55:15.760869    1577 log.go:172] (0xc000bf42c0) (0xc0009e5040) Stream added, broadcasting: 5\nI0218 21:55:15.763229    1577 log.go:172] (0xc000bf42c0) Reply frame received for 5\nI0218 21:55:15.844950    1577 log.go:172] (0xc000bf42c0) Data frame received for 5\nI0218 21:55:15.845040    1577 log.go:172] (0xc0009e5040) (5) Data frame handling\nI0218 21:55:15.845066    1577 log.go:172] (0xc0009e5040) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 21:55:15.854081    1577 log.go:172] (0xc000bf42c0) Data frame received for 3\nI0218 21:55:15.858192    1577 log.go:172] (0xc0009e4000) (3) Data frame handling\nI0218 21:55:15.858341    1577 log.go:172] (0xc0009e4000) (3) Data frame sent\nI0218 21:55:15.932563    1577 log.go:172] (0xc000bf42c0) (0xc0009e5040) Stream removed, broadcasting: 5\nI0218 21:55:15.933003    1577 log.go:172] (0xc000bf42c0) Data frame received for 1\nI0218 21:55:15.933146    1577 log.go:172] (0xc000bf42c0) (0xc0009e4000) Stream removed, broadcasting: 3\nI0218 21:55:15.933335    1577 log.go:172] (0xc000a38140) (1) Data frame handling\nI0218 21:55:15.933404    1577 log.go:172] (0xc000a38140) (1) Data frame sent\nI0218 21:55:15.933455    1577 log.go:172] (0xc000bf42c0) (0xc000a38140) Stream removed, broadcasting: 1\nI0218 21:55:15.933481    1577 
log.go:172] (0xc000bf42c0) Go away received\nI0218 21:55:15.934687    1577 log.go:172] (0xc000bf42c0) (0xc000a38140) Stream removed, broadcasting: 1\nI0218 21:55:15.934763    1577 log.go:172] (0xc000bf42c0) (0xc0009e4000) Stream removed, broadcasting: 3\nI0218 21:55:15.934782    1577 log.go:172] (0xc000bf42c0) (0xc0009e5040) Stream removed, broadcasting: 5\n"
Feb 18 21:55:15.942: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 18 21:55:15.942: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 18 21:55:15.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2552 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 21:55:16.337: INFO: stderr: "I0218 21:55:16.165143    1597 log.go:172] (0xc0007bea50) (0xc000631b80) Create stream\nI0218 21:55:16.165326    1597 log.go:172] (0xc0007bea50) (0xc000631b80) Stream added, broadcasting: 1\nI0218 21:55:16.169194    1597 log.go:172] (0xc0007bea50) Reply frame received for 1\nI0218 21:55:16.169233    1597 log.go:172] (0xc0007bea50) (0xc0005a4000) Create stream\nI0218 21:55:16.169243    1597 log.go:172] (0xc0007bea50) (0xc0005a4000) Stream added, broadcasting: 3\nI0218 21:55:16.170822    1597 log.go:172] (0xc0007bea50) Reply frame received for 3\nI0218 21:55:16.170847    1597 log.go:172] (0xc0007bea50) (0xc000631d60) Create stream\nI0218 21:55:16.170854    1597 log.go:172] (0xc0007bea50) (0xc000631d60) Stream added, broadcasting: 5\nI0218 21:55:16.172634    1597 log.go:172] (0xc0007bea50) Reply frame received for 5\nI0218 21:55:16.244651    1597 log.go:172] (0xc0007bea50) Data frame received for 5\nI0218 21:55:16.244700    1597 log.go:172] (0xc000631d60) (5) Data frame handling\nI0218 21:55:16.244770    1597 log.go:172] (0xc000631d60) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 21:55:16.245513    1597 log.go:172] (0xc0007bea50) Data frame received for 3\nI0218 21:55:16.245546    1597 log.go:172] (0xc0005a4000) (3) Data frame handling\nI0218 21:55:16.245581    1597 log.go:172] (0xc0005a4000) (3) Data frame sent\nI0218 21:55:16.326404    1597 log.go:172] (0xc0007bea50) Data frame received for 1\nI0218 21:55:16.326678    1597 log.go:172] (0xc0007bea50) (0xc0005a4000) Stream removed, broadcasting: 3\nI0218 21:55:16.326764    1597 log.go:172] (0xc000631b80) (1) Data frame handling\nI0218 21:55:16.326805    1597 log.go:172] (0xc000631b80) (1) Data frame sent\nI0218 21:55:16.326866    1597 log.go:172] (0xc0007bea50) (0xc000631d60) Stream removed, broadcasting: 5\nI0218 21:55:16.326906    1597 log.go:172] (0xc0007bea50) (0xc000631b80) Stream removed, broadcasting: 1\nI0218 21:55:16.326922    1597 
log.go:172] (0xc0007bea50) Go away received\nI0218 21:55:16.327867    1597 log.go:172] (0xc0007bea50) (0xc000631b80) Stream removed, broadcasting: 1\nI0218 21:55:16.327885    1597 log.go:172] (0xc0007bea50) (0xc0005a4000) Stream removed, broadcasting: 3\nI0218 21:55:16.327893    1597 log.go:172] (0xc0007bea50) (0xc000631d60) Stream removed, broadcasting: 5\n"
Feb 18 21:55:16.337: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 18 21:55:16.337: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 18 21:55:16.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2552 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 21:55:16.785: INFO: stderr: "I0218 21:55:16.515523    1618 log.go:172] (0xc000a0c000) (0xc000a5a000) Create stream\nI0218 21:55:16.516453    1618 log.go:172] (0xc000a0c000) (0xc000a5a000) Stream added, broadcasting: 1\nI0218 21:55:16.522637    1618 log.go:172] (0xc000a0c000) Reply frame received for 1\nI0218 21:55:16.522842    1618 log.go:172] (0xc000a0c000) (0xc0009c4000) Create stream\nI0218 21:55:16.522915    1618 log.go:172] (0xc000a0c000) (0xc0009c4000) Stream added, broadcasting: 3\nI0218 21:55:16.524528    1618 log.go:172] (0xc000a0c000) Reply frame received for 3\nI0218 21:55:16.524569    1618 log.go:172] (0xc000a0c000) (0xc0009de000) Create stream\nI0218 21:55:16.524595    1618 log.go:172] (0xc000a0c000) (0xc0009de000) Stream added, broadcasting: 5\nI0218 21:55:16.527513    1618 log.go:172] (0xc000a0c000) Reply frame received for 5\nI0218 21:55:16.643628    1618 log.go:172] (0xc000a0c000) Data frame received for 3\nI0218 21:55:16.643801    1618 log.go:172] (0xc0009c4000) (3) Data frame handling\nI0218 21:55:16.643849    1618 log.go:172] (0xc0009c4000) (3) Data frame sent\nI0218 21:55:16.644726    1618 log.go:172] (0xc000a0c000) Data frame received for 5\nI0218 21:55:16.644758    1618 log.go:172] (0xc0009de000) (5) Data frame handling\nI0218 21:55:16.644782    1618 log.go:172] (0xc0009de000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 21:55:16.761142    1618 log.go:172] (0xc000a0c000) Data frame received for 1\nI0218 21:55:16.761361    1618 log.go:172] (0xc000a0c000) (0xc0009de000) Stream removed, broadcasting: 5\nI0218 21:55:16.761485    1618 log.go:172] (0xc000a5a000) (1) Data frame handling\nI0218 21:55:16.761531    1618 log.go:172] (0xc000a0c000) (0xc0009c4000) Stream removed, broadcasting: 3\nI0218 21:55:16.761628    1618 log.go:172] (0xc000a5a000) (1) Data frame sent\nI0218 21:55:16.761651    1618 log.go:172] (0xc000a0c000) (0xc000a5a000) Stream removed, broadcasting: 1\nI0218 21:55:16.761679    1618 
log.go:172] (0xc000a0c000) Go away received\nI0218 21:55:16.767414    1618 log.go:172] (0xc000a0c000) (0xc000a5a000) Stream removed, broadcasting: 1\nI0218 21:55:16.767464    1618 log.go:172] (0xc000a0c000) (0xc0009c4000) Stream removed, broadcasting: 3\nI0218 21:55:16.767473    1618 log.go:172] (0xc000a0c000) (0xc0009de000) Stream removed, broadcasting: 5\n"
Feb 18 21:55:16.785: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 18 21:55:16.785: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
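Each of the three `kubectl exec` invocations above ends in `|| true`, so the step is idempotent: if a retry finds the file already moved, the failing `mv` still reports success to the framework. A minimal local sketch of that guard (pod names are echoed only; nothing is executed remotely, and the demo filename is invented):

```shell
# The e2e step runs `mv ... || true` inside each pod via `kubectl exec`;
# the pod names below come from the log, but no cluster is touched here.
for pod in ss-0 ss-1 ss-2; do
  echo "would exec on $pod: mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true"
done

# The guard itself: even when mv fails (source file absent), the command exits 0.
sh -c 'mv /tmp/no-such-file-e2e-demo /tmp/ 2>/dev/null || true'
echo "guard exit=$?"
# prints: guard exit=0
```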

Feb 18 21:55:16.785: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
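Reverse-order scale-down means the controller terminates the highest ordinal first (ss-2, then ss-1, then ss-0), waiting for each pod to go away before touching the next. A toy loop showing the expected ordering:

```shell
# Model of an ordered StatefulSet scale-down to 0: highest ordinal goes first.
replicas=3
while [ "$replicas" -gt 0 ]; do
  replicas=$((replicas - 1))
  echo "terminating ss-$replicas"
done
# prints: terminating ss-2
#         terminating ss-1
#         terminating ss-0
```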
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb 18 21:55:56.809: INFO: Deleting all statefulset in ns statefulset-2552
Feb 18 21:55:56.816: INFO: Scaling statefulset ss to 0
Feb 18 21:55:56.833: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 21:55:56.836: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:55:56.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2552" for this suite.

• [SLOW TEST:104.467 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":103,"skipped":1646,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:55:56.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 18 21:55:57.162: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8479 /api/v1/namespaces/watch-8479/configmaps/e2e-watch-test-resource-version b598be92-e232-4079-8079-b2f1ff24a495 9270554 0 2020-02-18 21:55:57 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 18 21:55:57.162: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8479 /api/v1/namespaces/watch-8479/configmaps/e2e-watch-test-resource-version b598be92-e232-4079-8079-b2f1ff24a495 9270555 0 2020-02-18 21:55:57 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
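The watch was opened at the resourceVersion returned by the first update, so only the two later events are delivered: the second modification (rv 9270554, `mutation: 2`) and the deletion (rv 9270555). A toy filter over an event stream makes the semantics concrete — the last two versions are taken from the log above, while the first two lines stand in for the (unlogged) earlier create and first update:

```shell
# "Start watching from resourceVersion RV" == deliver only events newer than RV.
printf '%s\n' \
  '9270552 ADDED' \
  '9270553 MODIFIED' \
  '9270554 MODIFIED' \
  '9270555 DELETED' |
awk -v rv=9270553 '$1 + 0 > rv + 0 { print $2, $1 }'
# prints: MODIFIED 9270554
#         DELETED 9270555
```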
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:55:57.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8479" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":104,"skipped":1657,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:55:57.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
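The log does not include the pod spec; below is a hedged sketch of the kind of object this step creates (image, command, and file path are illustrative, not taken from the log): a projected volume with a downwardAPI source that exposes the container's own memory request as a file via `resourceFieldRef`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the real test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # illustrative; the e2e suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```

The test then reads the container's log and asserts the printed value matches the declared request, which is why "Saw pod success" below requires phase `Succeeded`.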
Feb 18 21:55:57.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-035f53ea-7b44-4e4a-8b76-e8c61e6a41f8" in namespace "projected-2203" to be "success or failure"
Feb 18 21:55:57.373: INFO: Pod "downwardapi-volume-035f53ea-7b44-4e4a-8b76-e8c61e6a41f8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.312615ms
Feb 18 21:55:59.434: INFO: Pod "downwardapi-volume-035f53ea-7b44-4e4a-8b76-e8c61e6a41f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076049372s
Feb 18 21:56:01.441: INFO: Pod "downwardapi-volume-035f53ea-7b44-4e4a-8b76-e8c61e6a41f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082819614s
Feb 18 21:56:03.455: INFO: Pod "downwardapi-volume-035f53ea-7b44-4e4a-8b76-e8c61e6a41f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097083297s
Feb 18 21:56:05.462: INFO: Pod "downwardapi-volume-035f53ea-7b44-4e4a-8b76-e8c61e6a41f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10335582s
Feb 18 21:56:07.470: INFO: Pod "downwardapi-volume-035f53ea-7b44-4e4a-8b76-e8c61e6a41f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111642759s
STEP: Saw pod success
Feb 18 21:56:07.470: INFO: Pod "downwardapi-volume-035f53ea-7b44-4e4a-8b76-e8c61e6a41f8" satisfied condition "success or failure"
Feb 18 21:56:07.475: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-035f53ea-7b44-4e4a-8b76-e8c61e6a41f8 container client-container: 
STEP: delete the pod
Feb 18 21:56:07.548: INFO: Waiting for pod downwardapi-volume-035f53ea-7b44-4e4a-8b76-e8c61e6a41f8 to disappear
Feb 18 21:56:07.566: INFO: Pod downwardapi-volume-035f53ea-7b44-4e4a-8b76-e8c61e6a41f8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:56:07.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2203" for this suite.

• [SLOW TEST:10.449 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1666,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:56:07.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-0d90f16d-6e15-4aec-bf53-a601343195dd
STEP: Creating a pod to test consume secrets
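"Consumable in multiple volumes" means one secret is mounted through more than one volume in the same pod. A hedged sketch under that reading — the secret name comes from the log above; the pod name, image, and mount paths are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # the real test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                      # illustrative image
    command: ["sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test-0d90f16d-6e15-4aec-bf53-a601343195dd
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test-0d90f16d-6e15-4aec-bf53-a601343195dd
```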
Feb 18 21:56:07.775: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bc930086-aa85-4238-82f3-fdc446424f20" in namespace "projected-4383" to be "success or failure"
Feb 18 21:56:07.779: INFO: Pod "pod-projected-secrets-bc930086-aa85-4238-82f3-fdc446424f20": Phase="Pending", Reason="", readiness=false. Elapsed: 3.394459ms
Feb 18 21:56:09.789: INFO: Pod "pod-projected-secrets-bc930086-aa85-4238-82f3-fdc446424f20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013752806s
Feb 18 21:56:11.803: INFO: Pod "pod-projected-secrets-bc930086-aa85-4238-82f3-fdc446424f20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02798642s
Feb 18 21:56:13.811: INFO: Pod "pod-projected-secrets-bc930086-aa85-4238-82f3-fdc446424f20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035425332s
Feb 18 21:56:15.816: INFO: Pod "pod-projected-secrets-bc930086-aa85-4238-82f3-fdc446424f20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040758466s
STEP: Saw pod success
Feb 18 21:56:15.816: INFO: Pod "pod-projected-secrets-bc930086-aa85-4238-82f3-fdc446424f20" satisfied condition "success or failure"
Feb 18 21:56:15.819: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-bc930086-aa85-4238-82f3-fdc446424f20 container secret-volume-test: 
STEP: delete the pod
Feb 18 21:56:15.891: INFO: Waiting for pod pod-projected-secrets-bc930086-aa85-4238-82f3-fdc446424f20 to disappear
Feb 18 21:56:15.900: INFO: Pod pod-projected-secrets-bc930086-aa85-4238-82f3-fdc446424f20 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:56:15.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4383" for this suite.

• [SLOW TEST:8.498 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1675,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:56:16.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 18 21:56:16.333: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-a 5c505b6c-58c5-4dd7-8fc7-f64dc814ce57 9270677 0 2020-02-18 21:56:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 18 21:56:16.334: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-a 5c505b6c-58c5-4dd7-8fc7-f64dc814ce57 9270677 0 2020-02-18 21:56:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 18 21:56:26.348: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-a 5c505b6c-58c5-4dd7-8fc7-f64dc814ce57 9270711 0 2020-02-18 21:56:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 18 21:56:26.349: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-a 5c505b6c-58c5-4dd7-8fc7-f64dc814ce57 9270711 0 2020-02-18 21:56:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 18 21:56:36.363: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-a 5c505b6c-58c5-4dd7-8fc7-f64dc814ce57 9270735 0 2020-02-18 21:56:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 18 21:56:36.363: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-a 5c505b6c-58c5-4dd7-8fc7-f64dc814ce57 9270735 0 2020-02-18 21:56:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 18 21:56:46.477: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-a 5c505b6c-58c5-4dd7-8fc7-f64dc814ce57 9270759 0 2020-02-18 21:56:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 18 21:56:46.478: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-a 5c505b6c-58c5-4dd7-8fc7-f64dc814ce57 9270759 0 2020-02-18 21:56:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 18 21:56:56.493: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-b fdd2bd8b-94be-47d3-9080-829549777694 9270781 0 2020-02-18 21:56:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 18 21:56:56.493: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-b fdd2bd8b-94be-47d3-9080-829549777694 9270781 0 2020-02-18 21:56:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 18 21:57:06.518: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-b fdd2bd8b-94be-47d3-9080-829549777694 9270807 0 2020-02-18 21:56:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 18 21:57:06.518: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4448 /api/v1/namespaces/watch-4448/configmaps/e2e-watch-test-configmap-b fdd2bd8b-94be-47d3-9080-829549777694 9270807 0 2020-02-18 21:56:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
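Every event above is logged twice because three watches are open — one selecting label A, one selecting label B, and one selecting A-or-B — and each configmap change matches exactly two of them (its own label's watch plus the combined watch). A toy router over a label/event stream reproduces that doubling:

```shell
# Each event is delivered to its own label's watch and to the A-or-B watch,
# so every change is observed exactly twice, matching the duplicated "Got :" lines.
printf '%s\n' 'A ADDED' 'A MODIFIED' 'A MODIFIED' 'A DELETED' 'B ADDED' 'B DELETED' |
while read -r label event; do
  echo "watch-$label: $event configmap-$label"
  echo "watch-A-or-B: $event configmap-$label"
done
```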
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:57:16.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4448" for this suite.

• [SLOW TEST:60.454 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":107,"skipped":1689,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:57:16.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
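"Delete options say so" refers to the deletion propagation policy: the RC is deleted with `propagationPolicy: Orphan`, so the garbage collector must strip the owner references and leave the pods running — hence the 30-second wait to confirm nothing gets reaped. The body of such a DELETE request follows the standard `DeleteOptions` shape (a sketch, not copied from the test):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With kubectl of this vintage the same effect is typically achieved with `kubectl delete rc <name> --cascade=false`.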
STEP: Gathering metrics
W0218 21:57:58.514310       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 21:57:58.514: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:57:58.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4298" for this suite.

• [SLOW TEST:41.953 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":108,"skipped":1695,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:57:58.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
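The two quota objects the steps describe differ only in scope: `Terminating` matches pods with `spec.activeDeadlineSeconds` set, `NotTerminating` matches pods without it, which is why each pod's usage is captured by exactly one quota and ignored by the other. A hedged sketch (names and limits invented):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating
spec:
  hard:
    pods: "5"
  scopes: ["Terminating"]      # counts only pods with activeDeadlineSeconds set
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-terminating
spec:
  hard:
    pods: "5"
  scopes: ["NotTerminating"]   # counts only pods without activeDeadlineSeconds
```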
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:58:23.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9446" for this suite.

• [SLOW TEST:24.534 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":109,"skipped":1707,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:58:23.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4624.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4624.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4624.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4624.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4624.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4624.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
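Within both probe scripts, the pod's own A-record name is derived from its IP by replacing the dots with dashes and appending the namespace's pod DNS suffix. That transformation, isolated from the loop and run against an example IP (10.44.0.5 is illustrative, not from the log; the awk program is the one used by the probes):

```shell
# Turn a pod IP into its <a-b-c-d>.<namespace>.pod.cluster.local A record name.
echo "10.44.0.5" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4624.pod.cluster.local"}'
# prints: 10-44-0-5.dns-4624.pod.cluster.local
```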

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 21:58:37.383: INFO: DNS probes using dns-4624/dns-test-43f7243d-9872-4c5b-99e6-eddc4321e8df succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:58:37.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4624" for this suite.

• [SLOW TEST:14.508 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":110,"skipped":1710,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:58:37.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 21:58:38.676: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 21:58:40.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:58:42.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:58:44.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:58:46.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 21:58:48.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717659918, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 21:58:51.789: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
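The dummy configuration objects created and deleted in the steps above look roughly like the manifest below. This is a hedged sketch, not the test's actual object: the metadata name, webhook name, service path, and rules are all illustrative; only the service name/namespace echo values visible earlier in this log.

```yaml
# Hypothetical minimal ValidatingWebhookConfiguration, similar in shape to
# the dummy object the test creates and then deletes. The point of the test
# is that webhooks registered *on* such configuration objects must not be
# able to mutate them or block their deletion.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-dummy-validating-webhook   # illustrative name
webhooks:
- name: deny-nothing.example.com       # illustrative webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-4173
      path: /always-allow              # illustrative path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```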
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:58:52.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4173" for this suite.
STEP: Destroying namespace "webhook-4173-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.157 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":111,"skipped":1737,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:58:52.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 18 21:58:52.860: INFO: Number of nodes with available pods: 0
Feb 18 21:58:52.860: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:58:54.338: INFO: Number of nodes with available pods: 0
Feb 18 21:58:54.338: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:58:54.947: INFO: Number of nodes with available pods: 0
Feb 18 21:58:54.947: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:58:55.876: INFO: Number of nodes with available pods: 0
Feb 18 21:58:55.876: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:58:56.874: INFO: Number of nodes with available pods: 0
Feb 18 21:58:56.874: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:58:59.019: INFO: Number of nodes with available pods: 0
Feb 18 21:58:59.019: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:00.549: INFO: Number of nodes with available pods: 0
Feb 18 21:59:00.549: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:01.136: INFO: Number of nodes with available pods: 0
Feb 18 21:59:01.136: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:01.882: INFO: Number of nodes with available pods: 0
Feb 18 21:59:01.882: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:02.874: INFO: Number of nodes with available pods: 0
Feb 18 21:59:02.874: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:03.897: INFO: Number of nodes with available pods: 1
Feb 18 21:59:03.897: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:04.883: INFO: Number of nodes with available pods: 1
Feb 18 21:59:04.883: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:05.873: INFO: Number of nodes with available pods: 2
Feb 18 21:59:05.873: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 18 21:59:05.937: INFO: Number of nodes with available pods: 1
Feb 18 21:59:05.937: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:06.954: INFO: Number of nodes with available pods: 1
Feb 18 21:59:06.954: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:07.952: INFO: Number of nodes with available pods: 1
Feb 18 21:59:07.952: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:08.956: INFO: Number of nodes with available pods: 1
Feb 18 21:59:08.956: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:09.952: INFO: Number of nodes with available pods: 1
Feb 18 21:59:09.952: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:10.947: INFO: Number of nodes with available pods: 1
Feb 18 21:59:10.947: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:11.950: INFO: Number of nodes with available pods: 1
Feb 18 21:59:11.950: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:12.995: INFO: Number of nodes with available pods: 1
Feb 18 21:59:12.995: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:13.961: INFO: Number of nodes with available pods: 1
Feb 18 21:59:13.961: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:14.950: INFO: Number of nodes with available pods: 1
Feb 18 21:59:14.950: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:15.950: INFO: Number of nodes with available pods: 1
Feb 18 21:59:15.950: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:17.022: INFO: Number of nodes with available pods: 1
Feb 18 21:59:17.022: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:17.957: INFO: Number of nodes with available pods: 1
Feb 18 21:59:17.957: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:18.951: INFO: Number of nodes with available pods: 1
Feb 18 21:59:18.951: INFO: Node jerma-node is running more than one daemon pod
Feb 18 21:59:19.949: INFO: Number of nodes with available pods: 2
Feb 18 21:59:19.949: INFO: Number of running nodes: 2, number of available pods: 2
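The "simple DaemonSet" whose rollout and self-healing are polled above corresponds to a manifest along these lines. This is a sketch under assumptions: the labels and image are illustrative, while the name and namespace match the log.

```yaml
# Hypothetical spec for the "daemon-set" DaemonSet exercised above.
# A DaemonSet schedules one pod per eligible node, which is why the test
# waits for "Number of running nodes: 2, number of available pods: 2",
# and why a deleted daemon pod is recreated ("revived") automatically.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-5016
spec:
  selector:
    matchLabels:
      app: daemon-set          # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9   # illustrative image
```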
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5016, will wait for the garbage collector to delete the pods
Feb 18 21:59:20.017: INFO: Deleting DaemonSet.extensions daemon-set took: 11.710462ms
Feb 18 21:59:20.419: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.371283ms
Feb 18 21:59:33.125: INFO: Number of nodes with available pods: 0
Feb 18 21:59:33.125: INFO: Number of running nodes: 0, number of available pods: 0
Feb 18 21:59:33.130: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5016/daemonsets","resourceVersion":"9271479"},"items":null}

Feb 18 21:59:33.133: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5016/pods","resourceVersion":"9271479"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:59:33.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5016" for this suite.

• [SLOW TEST:40.440 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":112,"skipped":1748,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:59:33.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 21:59:33.303: INFO: Creating ReplicaSet my-hostname-basic-a9218921-bed0-47f7-834a-5489a7c4232c
Feb 18 21:59:33.319: INFO: Pod name my-hostname-basic-a9218921-bed0-47f7-834a-5489a7c4232c: Found 0 pods out of 1
Feb 18 21:59:38.327: INFO: Pod name my-hostname-basic-a9218921-bed0-47f7-834a-5489a7c4232c: Found 1 pods out of 1
Feb 18 21:59:38.327: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a9218921-bed0-47f7-834a-5489a7c4232c" is running
Feb 18 21:59:42.338: INFO: Pod "my-hostname-basic-a9218921-bed0-47f7-834a-5489a7c4232c-cd9s7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 21:59:33 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 21:59:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a9218921-bed0-47f7-834a-5489a7c4232c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 21:59:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a9218921-bed0-47f7-834a-5489a7c4232c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 21:59:33 +0000 UTC Reason: Message:}])
Feb 18 21:59:42.338: INFO: Trying to dial the pod
Feb 18 21:59:47.362: INFO: Controller my-hostname-basic-a9218921-bed0-47f7-834a-5489a7c4232c: Got expected result from replica 1 [my-hostname-basic-a9218921-bed0-47f7-834a-5489a7c4232c-cd9s7]: "my-hostname-basic-a9218921-bed0-47f7-834a-5489a7c4232c-cd9s7", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:59:47.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1190" for this suite.

• [SLOW TEST:14.201 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":113,"skipped":1752,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:59:47.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 18 21:59:47.491: INFO: Waiting up to 5m0s for pod "downward-api-ba2ebb83-1e96-4e3a-8970-8f7aa9c34cc9" in namespace "downward-api-459" to be "success or failure"
Feb 18 21:59:47.529: INFO: Pod "downward-api-ba2ebb83-1e96-4e3a-8970-8f7aa9c34cc9": Phase="Pending", Reason="", readiness=false. Elapsed: 37.691055ms
Feb 18 21:59:49.577: INFO: Pod "downward-api-ba2ebb83-1e96-4e3a-8970-8f7aa9c34cc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085968982s
Feb 18 21:59:51.585: INFO: Pod "downward-api-ba2ebb83-1e96-4e3a-8970-8f7aa9c34cc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094101171s
Feb 18 21:59:53.595: INFO: Pod "downward-api-ba2ebb83-1e96-4e3a-8970-8f7aa9c34cc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10461829s
Feb 18 21:59:55.602: INFO: Pod "downward-api-ba2ebb83-1e96-4e3a-8970-8f7aa9c34cc9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110707888s
Feb 18 21:59:57.610: INFO: Pod "downward-api-ba2ebb83-1e96-4e3a-8970-8f7aa9c34cc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118692605s
STEP: Saw pod success
Feb 18 21:59:57.610: INFO: Pod "downward-api-ba2ebb83-1e96-4e3a-8970-8f7aa9c34cc9" satisfied condition "success or failure"
Feb 18 21:59:57.642: INFO: Trying to get logs from node jerma-node pod downward-api-ba2ebb83-1e96-4e3a-8970-8f7aa9c34cc9 container dapi-container: 
STEP: delete the pod
Feb 18 21:59:57.734: INFO: Waiting for pod downward-api-ba2ebb83-1e96-4e3a-8970-8f7aa9c34cc9 to disappear
Feb 18 21:59:57.835: INFO: Pod downward-api-ba2ebb83-1e96-4e3a-8970-8f7aa9c34cc9 no longer exists
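The pod above surfaces its own container's resource limits and requests as environment variables through the downward API's `resourceFieldRef`. A hedged sketch of such a pod spec (names, image, command, and the specific resource values are illustrative):

```yaml
# Hypothetical pod demonstrating limits.cpu/memory and requests.cpu/memory
# exposed as env vars, as the test above verifies.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # illustrative; the test uses its own image
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```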
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 21:59:57.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-459" for this suite.

• [SLOW TEST:10.474 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1753,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 21:59:57.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-6431
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6431 to expose endpoints map[]
Feb 18 21:59:58.124: INFO: Get endpoints failed (74.907363ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 18 21:59:59.135: INFO: successfully validated that service multi-endpoint-test in namespace services-6431 exposes endpoints map[] (1.085489322s elapsed)
STEP: Creating pod pod1 in namespace services-6431
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6431 to expose endpoints map[pod1:[100]]
Feb 18 22:00:03.300: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.153445707s elapsed, will retry)
Feb 18 22:00:05.315: INFO: successfully validated that service multi-endpoint-test in namespace services-6431 exposes endpoints map[pod1:[100]] (6.168845294s elapsed)
STEP: Creating pod pod2 in namespace services-6431
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6431 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 18 22:00:10.498: INFO: Unexpected endpoints: found map[9df9ddce-9dd4-43e1-b65d-22fb51c11887:[100]], expected map[pod1:[100] pod2:[101]] (5.179157581s elapsed, will retry)
Feb 18 22:00:11.519: INFO: successfully validated that service multi-endpoint-test in namespace services-6431 exposes endpoints map[pod1:[100] pod2:[101]] (6.199602374s elapsed)
STEP: Deleting pod pod1 in namespace services-6431
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6431 to expose endpoints map[pod2:[101]]
Feb 18 22:00:11.606: INFO: successfully validated that service multi-endpoint-test in namespace services-6431 exposes endpoints map[pod2:[101]] (73.127088ms elapsed)
STEP: Deleting pod pod2 in namespace services-6431
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6431 to expose endpoints map[]
Feb 18 22:00:12.728: INFO: successfully validated that service multi-endpoint-test in namespace services-6431 exposes endpoints map[] (1.051907145s elapsed)
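A multiport service like the one validated above maps two named ports to different container ports, which is why the expected endpoints map pairs pod1 with target port 100 and pod2 with 101. A sketch, with the selector, port names, and service ports as illustrative assumptions (only the service name, namespace, and target ports come from the log):

```yaml
# Hypothetical spec for the multi-endpoint-test service exercised above.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-6431
spec:
  selector:
    app: multi-endpoint-test   # illustrative selector
  ports:
  - name: portname1            # illustrative port names
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
```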
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:00:12.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6431" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:14.996 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":115,"skipped":1789,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:00:12.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 18 22:00:13.004: INFO: Waiting up to 5m0s for pod "pod-7f344a53-9046-44c7-9de0-920181f6deb7" in namespace "emptydir-2173" to be "success or failure"
Feb 18 22:00:13.012: INFO: Pod "pod-7f344a53-9046-44c7-9de0-920181f6deb7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.585334ms
Feb 18 22:00:15.064: INFO: Pod "pod-7f344a53-9046-44c7-9de0-920181f6deb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059374483s
Feb 18 22:00:17.165: INFO: Pod "pod-7f344a53-9046-44c7-9de0-920181f6deb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160783845s
Feb 18 22:00:19.169: INFO: Pod "pod-7f344a53-9046-44c7-9de0-920181f6deb7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164767516s
Feb 18 22:00:21.180: INFO: Pod "pod-7f344a53-9046-44c7-9de0-920181f6deb7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175039579s
Feb 18 22:00:23.200: INFO: Pod "pod-7f344a53-9046-44c7-9de0-920181f6deb7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194977963s
Feb 18 22:00:25.211: INFO: Pod "pod-7f344a53-9046-44c7-9de0-920181f6deb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.206053528s
STEP: Saw pod success
Feb 18 22:00:25.211: INFO: Pod "pod-7f344a53-9046-44c7-9de0-920181f6deb7" satisfied condition "success or failure"
Feb 18 22:00:25.214: INFO: Trying to get logs from node jerma-node pod pod-7f344a53-9046-44c7-9de0-920181f6deb7 container test-container: 
STEP: delete the pod
Feb 18 22:00:25.599: INFO: Waiting for pod pod-7f344a53-9046-44c7-9de0-920181f6deb7 to disappear
Feb 18 22:00:25.620: INFO: Pod pod-7f344a53-9046-44c7-9de0-920181f6deb7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:00:25.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2173" for this suite.

• [SLOW TEST:12.785 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1805,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:00:25.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 18 22:00:35.831: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-829 PodName:pod-sharedvolume-46182775-0a72-4134-a3dc-4ffd7bc519b3 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:00:35.831: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:00:35.897559       8 log.go:172] (0xc002b50840) (0xc000b14d20) Create stream
I0218 22:00:35.897635       8 log.go:172] (0xc002b50840) (0xc000b14d20) Stream added, broadcasting: 1
I0218 22:00:35.903054       8 log.go:172] (0xc002b50840) Reply frame received for 1
I0218 22:00:35.903203       8 log.go:172] (0xc002b50840) (0xc0005c61e0) Create stream
I0218 22:00:35.903221       8 log.go:172] (0xc002b50840) (0xc0005c61e0) Stream added, broadcasting: 3
I0218 22:00:35.906272       8 log.go:172] (0xc002b50840) Reply frame received for 3
I0218 22:00:35.906321       8 log.go:172] (0xc002b50840) (0xc0005c6820) Create stream
I0218 22:00:35.906337       8 log.go:172] (0xc002b50840) (0xc0005c6820) Stream added, broadcasting: 5
I0218 22:00:35.909455       8 log.go:172] (0xc002b50840) Reply frame received for 5
I0218 22:00:36.015573       8 log.go:172] (0xc002b50840) Data frame received for 3
I0218 22:00:36.015980       8 log.go:172] (0xc0005c61e0) (3) Data frame handling
I0218 22:00:36.016107       8 log.go:172] (0xc0005c61e0) (3) Data frame sent
I0218 22:00:36.143180       8 log.go:172] (0xc002b50840) Data frame received for 1
I0218 22:00:36.143297       8 log.go:172] (0xc002b50840) (0xc0005c6820) Stream removed, broadcasting: 5
I0218 22:00:36.143479       8 log.go:172] (0xc000b14d20) (1) Data frame handling
I0218 22:00:36.143525       8 log.go:172] (0xc000b14d20) (1) Data frame sent
I0218 22:00:36.143591       8 log.go:172] (0xc002b50840) (0xc0005c61e0) Stream removed, broadcasting: 3
I0218 22:00:36.143676       8 log.go:172] (0xc002b50840) (0xc000b14d20) Stream removed, broadcasting: 1
I0218 22:00:36.143711       8 log.go:172] (0xc002b50840) Go away received
I0218 22:00:36.145245       8 log.go:172] (0xc002b50840) (0xc000b14d20) Stream removed, broadcasting: 1
I0218 22:00:36.145270       8 log.go:172] (0xc002b50840) (0xc0005c61e0) Stream removed, broadcasting: 3
I0218 22:00:36.145286       8 log.go:172] (0xc002b50840) (0xc0005c6820) Stream removed, broadcasting: 5
Feb 18 22:00:36.145: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:00:36.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-829" for this suite.

• [SLOW TEST:10.535 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
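The `cat /usr/share/volumeshare/shareddata.txt` exec above reads, from one container, a file produced in another container of the same pod. A minimal sketch of a pod with two containers sharing one emptyDir volume, assuming (hypothetically) that the other container writes the file:

```yaml
# Hypothetical sketch: two containers in one pod sharing an emptyDir volume,
# as exercised by the "pod should support shared volumes" test above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo      # hypothetical name
spec:
  containers:
  - name: nginx-container          # assumed to write shareddata.txt
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-main-container   # reads the file via the shared mount
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}
```

Because both containers mount the same `emptyDir`, a file written under `/usr/share/volumeshare` in one container is immediately visible to the other.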
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":117,"skipped":1819,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:00:36.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Feb 18 22:00:36.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 18 22:00:38.445: INFO: stderr: ""
Feb 18 22:00:38.445: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:00:38.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3825" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":118,"skipped":1826,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:00:38.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 18 22:00:38.652: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 18 22:00:43.662: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:00:43.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7827" for this suite.

• [SLOW TEST:5.422 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":119,"skipped":1832,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:00:43.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-4983af22-b9c2-4e11-b7b6-48aa62341e2b
STEP: Creating secret with name s-test-opt-upd-c8b93d35-cc17-41aa-809e-d7f09950ed0c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4983af22-b9c2-4e11-b7b6-48aa62341e2b
STEP: Updating secret s-test-opt-upd-c8b93d35-cc17-41aa-809e-d7f09950ed0c
STEP: Creating secret with name s-test-opt-create-6bd6dceb-009e-495c-8b62-c37bd94b3847
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:01:04.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3502" for this suite.

• [SLOW TEST:20.779 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1908,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:01:04.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-89d13ded-16e6-403d-bf7f-60020e7d6a02
STEP: Creating a pod to test consume configMaps
Feb 18 22:01:04.764: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a" in namespace "configmap-6651" to be "success or failure"
Feb 18 22:01:04.773: INFO: Pod "pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.096259ms
Feb 18 22:01:06.787: INFO: Pod "pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022659959s
Feb 18 22:01:08.792: INFO: Pod "pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028204159s
Feb 18 22:01:10.861: INFO: Pod "pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097360125s
Feb 18 22:01:12.873: INFO: Pod "pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109364751s
Feb 18 22:01:14.918: INFO: Pod "pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15422121s
Feb 18 22:01:16.924: INFO: Pod "pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.160042163s
STEP: Saw pod success
Feb 18 22:01:16.924: INFO: Pod "pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a" satisfied condition "success or failure"
Feb 18 22:01:16.926: INFO: Trying to get logs from node jerma-node pod pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a container configmap-volume-test: 
STEP: delete the pod
Feb 18 22:01:16.987: INFO: Waiting for pod pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a to disappear
Feb 18 22:01:16.990: INFO: Pod pod-configmaps-fd466041-5444-4232-8e0b-bfa361e33c9a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:01:16.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6651" for this suite.

• [SLOW TEST:12.333 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1908,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:01:16.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:01:17.090: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-2fc160b9-3742-4d6f-ba88-808a5f3315b3" in namespace "security-context-test-5104" to be "success or failure"
Feb 18 22:01:17.093: INFO: Pod "alpine-nnp-false-2fc160b9-3742-4d6f-ba88-808a5f3315b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.558592ms
Feb 18 22:01:19.102: INFO: Pod "alpine-nnp-false-2fc160b9-3742-4d6f-ba88-808a5f3315b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012339736s
Feb 18 22:01:21.109: INFO: Pod "alpine-nnp-false-2fc160b9-3742-4d6f-ba88-808a5f3315b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019521262s
Feb 18 22:01:23.117: INFO: Pod "alpine-nnp-false-2fc160b9-3742-4d6f-ba88-808a5f3315b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027064136s
Feb 18 22:01:25.124: INFO: Pod "alpine-nnp-false-2fc160b9-3742-4d6f-ba88-808a5f3315b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034675708s
Feb 18 22:01:25.124: INFO: Pod "alpine-nnp-false-2fc160b9-3742-4d6f-ba88-808a5f3315b3" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:01:25.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5104" for this suite.

• [SLOW TEST:8.181 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1911,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:01:25.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Feb 18 22:01:25.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 18 22:01:25.521: INFO: stderr: ""
Feb 18 22:01:25.521: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:01:25.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7900" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":123,"skipped":1916,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:01:25.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:01:25.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5850" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":124,"skipped":1942,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:01:25.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5002.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5002.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5002.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5002.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5002.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5002.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

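The per-pod names being probed (`dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local`) come from a headless service combined with a pod that sets `hostname` and `subdomain`. A hedged sketch of that pairing, reconstructed from the names in the log (ports, labels, and images are assumptions):

```yaml
# Hypothetical sketch of the headless service + subdomain pod behind the
# DNS names probed above: a headless service whose name matches the pod's
# "subdomain" yields a per-pod A record.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None                  # headless service
  selector:
    name: dns-querier-2
  ports:
  - name: http                     # assumed port
    port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier-2
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2    # resolvable as dns-querier-2.dns-test-service-2.<ns>.svc.cluster.local
  containers:
  - name: querier
    image: busybox
    command: ["sleep", "3600"]
```

The early "Unable to read" lines that follow are expected while the DNS record propagates; the probe loop retries until the lookups succeed.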
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 22:01:37.915: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:37.921: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:37.926: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:37.948: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:37.970: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:37.974: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:37.979: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:37.983: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:37.993: INFO: Lookups using dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local]

Feb 18 22:01:43.001: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:43.006: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:43.009: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:43.013: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:43.026: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:43.030: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:43.033: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:43.035: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:43.040: INFO: Lookups using dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local]

Feb 18 22:01:48.017: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:48.023: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:48.025: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:48.028: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:48.037: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:48.039: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:48.042: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:48.046: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:48.061: INFO: Lookups using dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local]

Feb 18 22:01:53.002: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:53.006: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:53.009: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:53.013: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:53.025: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:53.027: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:53.030: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:53.033: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:53.038: INFO: Lookups using dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local]

Feb 18 22:01:58.001: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:58.007: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:58.011: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:58.016: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:58.029: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:58.034: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:58.038: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:58.041: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:01:58.050: INFO: Lookups using dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local]

Feb 18 22:02:03.002: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:02:03.015: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:02:03.019: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:02:03.025: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:02:03.048: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:02:03.056: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:02:03.060: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:02:03.066: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local from pod dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1: the server could not find the requested resource (get pods dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1)
Feb 18 22:02:03.077: INFO: Lookups using dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5002.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5002.svc.cluster.local jessie_udp@dns-test-service-2.dns-5002.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5002.svc.cluster.local]

Feb 18 22:02:08.061: INFO: DNS probes using dns-5002/dns-test-e85c4f86-21ab-4f3c-b708-9121958e58e1 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:02:08.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5002" for this suite.

• [SLOW TEST:42.459 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":125,"skipped":2012,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:02:08.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 18 22:02:08.384: INFO: Waiting up to 5m0s for pod "pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc" in namespace "emptydir-1162" to be "success or failure"
Feb 18 22:02:08.403: INFO: Pod "pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 19.182833ms
Feb 18 22:02:10.410: INFO: Pod "pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025562717s
Feb 18 22:02:12.416: INFO: Pod "pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031559121s
Feb 18 22:02:14.427: INFO: Pod "pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042922067s
Feb 18 22:02:16.472: INFO: Pod "pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087712496s
Feb 18 22:02:18.480: INFO: Pod "pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.096029857s
Feb 18 22:02:20.487: INFO: Pod "pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.102971155s
STEP: Saw pod success
Feb 18 22:02:20.487: INFO: Pod "pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc" satisfied condition "success or failure"
Feb 18 22:02:20.492: INFO: Trying to get logs from node jerma-node pod pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc container test-container: 
STEP: delete the pod
Feb 18 22:02:20.657: INFO: Waiting for pod pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc to disappear
Feb 18 22:02:20.661: INFO: Pod pod-6ec5e466-d8a7-4918-a1e7-8d6870c9bfcc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:02:20.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1162" for this suite.

• [SLOW TEST:12.465 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2025,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:02:20.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-3975
STEP: creating replication controller nodeport-test in namespace services-3975
I0218 22:02:21.004487       8 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3975, replica count: 2
I0218 22:02:24.056244       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:02:27.056726       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:02:30.057408       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:02:33.057824       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 18 22:02:33.057: INFO: Creating new exec pod
Feb 18 22:02:42.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3975 execpodfd957 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Feb 18 22:02:42.608: INFO: stderr: "I0218 22:02:42.369963    1683 log.go:172] (0xc000a9ed10) (0xc000ab0500) Create stream\nI0218 22:02:42.370396    1683 log.go:172] (0xc000a9ed10) (0xc000ab0500) Stream added, broadcasting: 1\nI0218 22:02:42.379035    1683 log.go:172] (0xc000a9ed10) Reply frame received for 1\nI0218 22:02:42.379090    1683 log.go:172] (0xc000a9ed10) (0xc0002379a0) Create stream\nI0218 22:02:42.379102    1683 log.go:172] (0xc000a9ed10) (0xc0002379a0) Stream added, broadcasting: 3\nI0218 22:02:42.380957    1683 log.go:172] (0xc000a9ed10) Reply frame received for 3\nI0218 22:02:42.381016    1683 log.go:172] (0xc000a9ed10) (0xc000a04140) Create stream\nI0218 22:02:42.381027    1683 log.go:172] (0xc000a9ed10) (0xc000a04140) Stream added, broadcasting: 5\nI0218 22:02:42.382372    1683 log.go:172] (0xc000a9ed10) Reply frame received for 5\nI0218 22:02:42.470617    1683 log.go:172] (0xc000a9ed10) Data frame received for 5\nI0218 22:02:42.470885    1683 log.go:172] (0xc000a04140) (5) Data frame handling\nI0218 22:02:42.470938    1683 log.go:172] (0xc000a04140) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0218 22:02:42.474808    1683 log.go:172] (0xc000a9ed10) Data frame received for 5\nI0218 22:02:42.474860    1683 log.go:172] (0xc000a04140) (5) Data frame handling\nI0218 22:02:42.474886    1683 log.go:172] (0xc000a04140) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0218 22:02:42.585469    1683 log.go:172] (0xc000a9ed10) Data frame received for 1\nI0218 22:02:42.585690    1683 log.go:172] (0xc000a9ed10) (0xc000a04140) Stream removed, broadcasting: 5\nI0218 22:02:42.585774    1683 log.go:172] (0xc000ab0500) (1) Data frame handling\nI0218 22:02:42.585796    1683 log.go:172] (0xc000ab0500) (1) Data frame sent\nI0218 22:02:42.585879    1683 log.go:172] (0xc000a9ed10) (0xc0002379a0) Stream removed, broadcasting: 3\nI0218 22:02:42.585922    1683 log.go:172] (0xc000a9ed10) (0xc000ab0500) Stream removed, broadcasting: 1\nI0218 22:02:42.585942    1683 log.go:172] (0xc000a9ed10) Go away received\nI0218 22:02:42.586978    1683 log.go:172] (0xc000a9ed10) (0xc000ab0500) Stream removed, broadcasting: 1\nI0218 22:02:42.586997    1683 log.go:172] (0xc000a9ed10) (0xc0002379a0) Stream removed, broadcasting: 3\nI0218 22:02:42.587004    1683 log.go:172] (0xc000a9ed10) (0xc000a04140) Stream removed, broadcasting: 5\n"
Feb 18 22:02:42.609: INFO: stdout: ""
Feb 18 22:02:42.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3975 execpodfd957 -- /bin/sh -x -c nc -zv -t -w 2 10.96.68.48 80'
Feb 18 22:02:42.986: INFO: stderr: "I0218 22:02:42.788867    1703 log.go:172] (0xc0000f46e0) (0xc0007855e0) Create stream\nI0218 22:02:42.789054    1703 log.go:172] (0xc0000f46e0) (0xc0007855e0) Stream added, broadcasting: 1\nI0218 22:02:42.792344    1703 log.go:172] (0xc0000f46e0) Reply frame received for 1\nI0218 22:02:42.792463    1703 log.go:172] (0xc0000f46e0) (0xc0006efb80) Create stream\nI0218 22:02:42.792483    1703 log.go:172] (0xc0000f46e0) (0xc0006efb80) Stream added, broadcasting: 3\nI0218 22:02:42.793769    1703 log.go:172] (0xc0000f46e0) Reply frame received for 3\nI0218 22:02:42.793794    1703 log.go:172] (0xc0000f46e0) (0xc0008fa000) Create stream\nI0218 22:02:42.793799    1703 log.go:172] (0xc0000f46e0) (0xc0008fa000) Stream added, broadcasting: 5\nI0218 22:02:42.795579    1703 log.go:172] (0xc0000f46e0) Reply frame received for 5\nI0218 22:02:42.882429    1703 log.go:172] (0xc0000f46e0) Data frame received for 5\nI0218 22:02:42.882469    1703 log.go:172] (0xc0008fa000) (5) Data frame handling\nI0218 22:02:42.882511    1703 log.go:172] (0xc0008fa000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.68.48 80\nI0218 22:02:42.888495    1703 log.go:172] (0xc0000f46e0) Data frame received for 5\nI0218 22:02:42.888757    1703 log.go:172] (0xc0008fa000) (5) Data frame handling\nI0218 22:02:42.888789    1703 log.go:172] (0xc0008fa000) (5) Data frame sent\nConnection to 10.96.68.48 80 port [tcp/http] succeeded!\nI0218 22:02:42.969269    1703 log.go:172] (0xc0000f46e0) Data frame received for 1\nI0218 22:02:42.969330    1703 log.go:172] (0xc0007855e0) (1) Data frame handling\nI0218 22:02:42.969344    1703 log.go:172] (0xc0007855e0) (1) Data frame sent\nI0218 22:02:42.969530    1703 log.go:172] (0xc0000f46e0) (0xc0006efb80) Stream removed, broadcasting: 3\nI0218 22:02:42.969599    1703 log.go:172] (0xc0000f46e0) (0xc0007855e0) Stream removed, broadcasting: 1\nI0218 22:02:42.970500    1703 log.go:172] (0xc0000f46e0) (0xc0008fa000) Stream removed, broadcasting: 5\nI0218 22:02:42.970621    1703 log.go:172] (0xc0000f46e0) (0xc0007855e0) Stream removed, broadcasting: 1\nI0218 22:02:42.970671    1703 log.go:172] (0xc0000f46e0) (0xc0006efb80) Stream removed, broadcasting: 3\nI0218 22:02:42.970705    1703 log.go:172] (0xc0000f46e0) (0xc0008fa000) Stream removed, broadcasting: 5\n"
Feb 18 22:02:42.986: INFO: stdout: ""
Feb 18 22:02:42.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3975 execpodfd957 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 31079'
Feb 18 22:02:43.367: INFO: stderr: "I0218 22:02:43.182984    1725 log.go:172] (0xc000112d10) (0xc000659c20) Create stream\nI0218 22:02:43.183113    1725 log.go:172] (0xc000112d10) (0xc000659c20) Stream added, broadcasting: 1\nI0218 22:02:43.186975    1725 log.go:172] (0xc000112d10) Reply frame received for 1\nI0218 22:02:43.187020    1725 log.go:172] (0xc000112d10) (0xc00092a000) Create stream\nI0218 22:02:43.187035    1725 log.go:172] (0xc000112d10) (0xc00092a000) Stream added, broadcasting: 3\nI0218 22:02:43.188334    1725 log.go:172] (0xc000112d10) Reply frame received for 3\nI0218 22:02:43.188351    1725 log.go:172] (0xc000112d10) (0xc000659cc0) Create stream\nI0218 22:02:43.188357    1725 log.go:172] (0xc000112d10) (0xc000659cc0) Stream added, broadcasting: 5\nI0218 22:02:43.189383    1725 log.go:172] (0xc000112d10) Reply frame received for 5\nI0218 22:02:43.241774    1725 log.go:172] (0xc000112d10) Data frame received for 5\nI0218 22:02:43.241820    1725 log.go:172] (0xc000659cc0) (5) Data frame handling\nI0218 22:02:43.241840    1725 log.go:172] (0xc000659cc0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 31079\nI0218 22:02:43.246745    1725 log.go:172] (0xc000112d10) Data frame received for 5\nI0218 22:02:43.246814    1725 log.go:172] (0xc000659cc0) (5) Data frame handling\nI0218 22:02:43.246872    1725 log.go:172] (0xc000659cc0) (5) Data frame sent\nConnection to 10.96.2.250 31079 port [tcp/31079] succeeded!\nI0218 22:02:43.352994    1725 log.go:172] (0xc000112d10) (0xc000659cc0) Stream removed, broadcasting: 5\nI0218 22:02:43.353289    1725 log.go:172] (0xc000112d10) Data frame received for 1\nI0218 22:02:43.353434    1725 log.go:172] (0xc000112d10) (0xc00092a000) Stream removed, broadcasting: 3\nI0218 22:02:43.353564    1725 log.go:172] (0xc000659c20) (1) Data frame handling\nI0218 22:02:43.353610    1725 log.go:172] (0xc000659c20) (1) Data frame sent\nI0218 22:02:43.353619    1725 log.go:172] (0xc000112d10) (0xc000659c20) Stream removed, broadcasting: 1\nI0218 22:02:43.353641    1725 log.go:172] (0xc000112d10) Go away received\nI0218 22:02:43.355002    1725 log.go:172] (0xc000112d10) (0xc000659c20) Stream removed, broadcasting: 1\nI0218 22:02:43.355012    1725 log.go:172] (0xc000112d10) (0xc00092a000) Stream removed, broadcasting: 3\nI0218 22:02:43.355016    1725 log.go:172] (0xc000112d10) (0xc000659cc0) Stream removed, broadcasting: 5\n"
Feb 18 22:02:43.367: INFO: stdout: ""
Feb 18 22:02:43.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3975 execpodfd957 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 31079'
Feb 18 22:02:43.740: INFO: stderr: "I0218 22:02:43.561606    1747 log.go:172] (0xc000b82dc0) (0xc000b66280) Create stream\nI0218 22:02:43.561728    1747 log.go:172] (0xc000b82dc0) (0xc000b66280) Stream added, broadcasting: 1\nI0218 22:02:43.578242    1747 log.go:172] (0xc000b82dc0) Reply frame received for 1\nI0218 22:02:43.578275    1747 log.go:172] (0xc000b82dc0) (0xc0006be6e0) Create stream\nI0218 22:02:43.578286    1747 log.go:172] (0xc000b82dc0) (0xc0006be6e0) Stream added, broadcasting: 3\nI0218 22:02:43.579654    1747 log.go:172] (0xc000b82dc0) Reply frame received for 3\nI0218 22:02:43.579685    1747 log.go:172] (0xc000b82dc0) (0xc00054d4a0) Create stream\nI0218 22:02:43.579692    1747 log.go:172] (0xc000b82dc0) (0xc00054d4a0) Stream added, broadcasting: 5\nI0218 22:02:43.580623    1747 log.go:172] (0xc000b82dc0) Reply frame received for 5\nI0218 22:02:43.651277    1747 log.go:172] (0xc000b82dc0) Data frame received for 5\nI0218 22:02:43.651389    1747 log.go:172] (0xc00054d4a0) (5) Data frame handling\nI0218 22:02:43.651422    1747 log.go:172] (0xc00054d4a0) (5) Data frame sent\nI0218 22:02:43.651432    1747 log.go:172] (0xc000b82dc0) Data frame received for 5\nI0218 22:02:43.651441    1747 log.go:172] (0xc00054d4a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.1.234 31079\nI0218 22:02:43.651505    1747 log.go:172] (0xc00054d4a0) (5) Data frame sent\nI0218 22:02:43.654196    1747 log.go:172] (0xc000b82dc0) Data frame received for 5\nI0218 22:02:43.654223    1747 log.go:172] (0xc00054d4a0) (5) Data frame handling\nI0218 22:02:43.654240    1747 log.go:172] (0xc00054d4a0) (5) Data frame sent\nConnection to 10.96.1.234 31079 port [tcp/31079] succeeded!\nI0218 22:02:43.725681    1747 log.go:172] (0xc000b82dc0) Data frame received for 1\nI0218 22:02:43.725763    1747 log.go:172] (0xc000b66280) (1) Data frame handling\nI0218 22:02:43.725779    1747 log.go:172] (0xc000b66280) (1) Data frame sent\nI0218 22:02:43.725800    1747 log.go:172] (0xc000b82dc0) (0xc000b66280) Stream removed, broadcasting: 1\nI0218 22:02:43.726183    1747 log.go:172] (0xc000b82dc0) (0xc0006be6e0) Stream removed, broadcasting: 3\nI0218 22:02:43.726490    1747 log.go:172] (0xc000b82dc0) (0xc00054d4a0) Stream removed, broadcasting: 5\nI0218 22:02:43.726828    1747 log.go:172] (0xc000b82dc0) (0xc000b66280) Stream removed, broadcasting: 1\nI0218 22:02:43.726851    1747 log.go:172] (0xc000b82dc0) (0xc0006be6e0) Stream removed, broadcasting: 3\nI0218 22:02:43.726871    1747 log.go:172] (0xc000b82dc0) (0xc00054d4a0) Stream removed, broadcasting: 5\nI0218 22:02:43.727004    1747 log.go:172] (0xc000b82dc0) Go away received\n"
Feb 18 22:02:43.740: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:02:43.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3975" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.081 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":127,"skipped":2045,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:02:43.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:02:43.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8642" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":128,"skipped":2050,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:02:43.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Feb 18 22:02:44.069: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Feb 18 22:02:44.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6756'
Feb 18 22:02:44.496: INFO: stderr: ""
Feb 18 22:02:44.496: INFO: stdout: "service/agnhost-slave created\n"
Feb 18 22:02:44.498: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Feb 18 22:02:44.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6756'
Feb 18 22:02:44.938: INFO: stderr: ""
Feb 18 22:02:44.938: INFO: stdout: "service/agnhost-master created\n"
Feb 18 22:02:44.939: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 18 22:02:44.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6756'
Feb 18 22:02:45.404: INFO: stderr: ""
Feb 18 22:02:45.404: INFO: stdout: "service/frontend created\n"
Feb 18 22:02:45.405: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Feb 18 22:02:45.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6756'
Feb 18 22:02:45.914: INFO: stderr: ""
Feb 18 22:02:45.914: INFO: stdout: "deployment.apps/frontend created\n"
Feb 18 22:02:45.915: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 18 22:02:45.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6756'
Feb 18 22:02:46.437: INFO: stderr: ""
Feb 18 22:02:46.437: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb 18 22:02:46.438: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 18 22:02:46.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6756'
Feb 18 22:02:48.214: INFO: stderr: ""
Feb 18 22:02:48.214: INFO: stdout: "deployment.apps/agnhost-slave created\n"
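Each manifest above is piped to kubectl on stdin with `create -f -` scoped to the test namespace. The invocation recorded in the log can be sketched as the following helper (a sketch only: the kubeconfig path and namespace are taken from this run, and `build_create_cmd`/`run_create` are illustrative names, not the harness's actual code):

```python
# Sketch of the kubectl invocation the e2e harness logs above: each YAML
# manifest is piped to `kubectl create -f -` on stdin in the test namespace.
# KUBECTL/KUBECONFIG values are taken from this log run.
import subprocess

KUBECTL = "/usr/local/bin/kubectl"
KUBECONFIG = "/root/.kube/config"

def build_create_cmd(namespace: str) -> list[str]:
    """Return the argv for creating a resource from stdin in a namespace."""
    return [KUBECTL, f"--kubeconfig={KUBECONFIG}", "create", "-f", "-",
            f"--namespace={namespace}"]

def run_create(manifest: str, namespace: str) -> str:
    """Pipe a manifest to kubectl and return its stdout (requires a cluster)."""
    result = subprocess.run(build_create_cmd(namespace), input=manifest,
                            capture_output=True, text=True, check=True)
    return result.stdout
```

For this run, `build_create_cmd("kubectl-6756")` reproduces the command string logged before each "service/... created" line.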
STEP: validating guestbook app
Feb 18 22:02:48.214: INFO: Waiting for all frontend pods to be Running.
Feb 18 22:03:18.268: INFO: Waiting for frontend to serve content.
Feb 18 22:03:18.335: INFO: Trying to add a new entry to the guestbook.
Feb 18 22:03:18.373: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:03:23.401: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:03:28.431: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:03:33.455: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:03:38.487: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:03:43.506: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:03:48.526: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:03:53.548: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:03:58.576: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:03.600: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:08.623: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:13.654: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:18.672: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:23.693: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:28.711: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:33.738: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:38.755: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:43.885: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:48.918: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:53.944: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:04:58.961: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:03.982: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:09.007: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:14.033: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:19.065: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:24.088: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:29.111: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:34.127: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:39.144: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:44.159: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:49.174: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:54.193: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:05:59.209: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:06:04.226: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:06:09.253: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:06:14.281: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 18 22:06:19.282: FAIL: Cannot add new entry in 180 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x5424e60, 0xc002781340, 0xc0032769a0, 0xc)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:417 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0017c3600)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc0017c3600)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc0017c3600, 0x4c30de8)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
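The failure above comes out of a poll loop: the harness retries the guestbook write roughly every five seconds and gives up once 180 seconds have elapsed, which matches the timestamps on the repeated connection-refused lines. The shape of that loop can be sketched as follows (a sketch of the pattern only, not the actual code in kubectl.go):

```python
# Poll-until-deadline pattern behind "Cannot add new entry in 180 seconds":
# retry an action every `interval` seconds until it succeeds or `timeout`
# elapses. Defaults mirror the intervals visible in the log above.
import time

def wait_until(action, timeout: float = 180.0, interval: float = 5.0) -> bool:
    """Call action() repeatedly; True on success, False once the deadline passes."""
    deadline = time.monotonic() + timeout
    while True:
        if action():
            return True
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False  # deadline exceeded -> the FAIL path in the log
        time.sleep(min(interval, remaining))
```

In this run every attempt failed with `dial tcp 10.32.0.1:6379: connect: connection refused`, so the loop exhausted its deadline and the test failed.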
STEP: using delete to clean up resources
Feb 18 22:06:19.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6756'
Feb 18 22:06:19.597: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 18 22:06:19.597: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 18 22:06:19.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6756'
Feb 18 22:06:19.802: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 18 22:06:19.802: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 18 22:06:19.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6756'
Feb 18 22:06:20.160: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 18 22:06:20.160: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 18 22:06:20.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6756'
Feb 18 22:06:20.341: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 18 22:06:20.342: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 18 22:06:20.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6756'
Feb 18 22:06:20.536: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 18 22:06:20.537: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 18 22:06:20.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6756'
Feb 18 22:06:20.935: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 18 22:06:20.935: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "kubectl-6756".
STEP: Found 37 events.
Feb 18 22:06:20.959: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-5bbl6: {default-scheduler } Scheduled: Successfully assigned kubectl-6756/agnhost-master-74c46fb7d4-5bbl6 to jerma-node
Feb 18 22:06:20.960: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-6qtgk: {default-scheduler } Scheduled: Successfully assigned kubectl-6756/agnhost-slave-774cfc759f-6qtgk to jerma-node
Feb 18 22:06:20.960: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-gllmv: {default-scheduler } Scheduled: Successfully assigned kubectl-6756/agnhost-slave-774cfc759f-gllmv to jerma-server-mvvl6gufaqub
Feb 18 22:06:20.960: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-9fm9f: {default-scheduler } Scheduled: Successfully assigned kubectl-6756/frontend-6c5f89d5d4-9fm9f to jerma-node
Feb 18 22:06:20.960: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-btms4: {default-scheduler } Scheduled: Successfully assigned kubectl-6756/frontend-6c5f89d5d4-btms4 to jerma-node
Feb 18 22:06:20.960: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-hng2b: {default-scheduler } Scheduled: Successfully assigned kubectl-6756/frontend-6c5f89d5d4-hng2b to jerma-server-mvvl6gufaqub
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:45 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:45 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-9fm9f
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:46 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:46 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-5bbl6
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:46 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-hng2b
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:46 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-btms4
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:48 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:48 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-6qtgk
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:48 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-gllmv
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:54 +0000 UTC - event for frontend-6c5f89d5d4-9fm9f: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:55 +0000 UTC - event for frontend-6c5f89d5d4-hng2b: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:02:58 +0000 UTC - event for agnhost-slave-774cfc759f-gllmv: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:00 +0000 UTC - event for frontend-6c5f89d5d4-btms4: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:01 +0000 UTC - event for agnhost-master-74c46fb7d4-5bbl6: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:04 +0000 UTC - event for agnhost-slave-774cfc759f-gllmv: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:04 +0000 UTC - event for frontend-6c5f89d5d4-hng2b: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:06 +0000 UTC - event for agnhost-slave-774cfc759f-6qtgk: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:07 +0000 UTC - event for agnhost-slave-774cfc759f-gllmv: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:07 +0000 UTC - event for frontend-6c5f89d5d4-hng2b: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:12 +0000 UTC - event for frontend-6c5f89d5d4-9fm9f: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:14 +0000 UTC - event for agnhost-master-74c46fb7d4-5bbl6: {kubelet jerma-node} Started: Started container master
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:14 +0000 UTC - event for agnhost-master-74c46fb7d4-5bbl6: {kubelet jerma-node} Created: Created container master
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:14 +0000 UTC - event for agnhost-slave-774cfc759f-6qtgk: {kubelet jerma-node} Started: Started container slave
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:14 +0000 UTC - event for agnhost-slave-774cfc759f-6qtgk: {kubelet jerma-node} Created: Created container slave
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:14 +0000 UTC - event for frontend-6c5f89d5d4-9fm9f: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:14 +0000 UTC - event for frontend-6c5f89d5d4-btms4: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:03:14 +0000 UTC - event for frontend-6c5f89d5d4-btms4: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:06:20 +0000 UTC - event for agnhost-master-74c46fb7d4-5bbl6: {kubelet jerma-node} Killing: Stopping container master
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:06:20 +0000 UTC - event for frontend-6c5f89d5d4-9fm9f: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:06:20 +0000 UTC - event for frontend-6c5f89d5d4-btms4: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Feb 18 22:06:20.960: INFO: At 2020-02-18 22:06:20 +0000 UTC - event for frontend-6c5f89d5d4-hng2b: {kubelet jerma-server-mvvl6gufaqub} Killing: Stopping container guestbook-frontend
Feb 18 22:06:21.017: INFO: POD                              NODE                       PHASE    GRACE  CONDITIONS
Feb 18 22:06:21.018: INFO: agnhost-master-74c46fb7d4-5bbl6  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:46 +0000 UTC  }]
Feb 18 22:06:21.018: INFO: agnhost-slave-774cfc759f-6qtgk   jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:48 +0000 UTC  }]
Feb 18 22:06:21.018: INFO: agnhost-slave-774cfc759f-gllmv   jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:48 +0000 UTC  }]
Feb 18 22:06:21.018: INFO: frontend-6c5f89d5d4-9fm9f        jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:45 +0000 UTC  }]
Feb 18 22:06:21.018: INFO: frontend-6c5f89d5d4-btms4        jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:46 +0000 UTC  }]
Feb 18 22:06:21.018: INFO: frontend-6c5f89d5d4-hng2b        jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:03:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:02:46 +0000 UTC  }]
Feb 18 22:06:21.018: INFO: 
Feb 18 22:06:21.120: INFO: 
Logging node info for node jerma-node
Feb 18 22:06:21.159: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 9272192 0 2020-01-04 11:59:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-18 22:02:05 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-18 22:02:05 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-18 22:02:05 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-18 22:02:05 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 18 22:06:21.162: INFO: 
Logging kubelet events for node jerma-node
Feb 18 22:06:21.195: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Feb 18 22:06:22.663: INFO: agnhost-slave-774cfc759f-6qtgk started at 2020-02-18 22:02:49 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.663: INFO: 	Container slave ready: true, restart count 0
Feb 18 22:06:22.663: INFO: frontend-6c5f89d5d4-btms4 started at 2020-02-18 22:02:46 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.663: INFO: 	Container guestbook-frontend ready: true, restart count 0
Feb 18 22:06:22.663: INFO: frontend-6c5f89d5d4-9fm9f started at 2020-02-18 22:02:46 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.663: INFO: 	Container guestbook-frontend ready: true, restart count 0
Feb 18 22:06:22.663: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.663: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 22:06:22.663: INFO: agnhost-master-74c46fb7d4-5bbl6 started at 2020-02-18 22:02:48 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.663: INFO: 	Container master ready: true, restart count 0
Feb 18 22:06:22.663: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Feb 18 22:06:22.663: INFO: 	Container weave ready: true, restart count 1
Feb 18 22:06:22.663: INFO: 	Container weave-npc ready: true, restart count 0
W0218 22:06:22.668646       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 22:06:22.709: INFO: 
Latency metrics for node jerma-node
Feb 18 22:06:22.709: INFO: 
Logging node info for node jerma-server-mvvl6gufaqub
Feb 18 22:06:22.714: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub   /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 9272679 0 2020-01-04 11:47:40 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-18 22:04:01 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-18 22:04:01 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-18 22:04:01 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-18 22:04:01 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[ollivier/functest-kubernetes-security@sha256:e07875af6d375759fd233dc464382bb51d2464f6ae50a60625e41226eb1f87be ollivier/functest-kubernetes-security:latest],SizeBytes:1118568659,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 18 22:06:22.715: INFO: 
Logging kubelet events for node jerma-server-mvvl6gufaqub
Feb 18 22:06:22.719: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub
Feb 18 22:06:22.745: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.745: INFO: 	Container etcd ready: true, restart count 1
Feb 18 22:06:22.745: INFO: frontend-6c5f89d5d4-hng2b started at 2020-02-18 22:02:46 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.745: INFO: 	Container guestbook-frontend ready: true, restart count 0
Feb 18 22:06:22.745: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.745: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 18 22:06:22.745: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.745: INFO: 	Container coredns ready: true, restart count 0
Feb 18 22:06:22.745: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.745: INFO: 	Container coredns ready: true, restart count 0
Feb 18 22:06:22.745: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.745: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 22:06:22.745: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Feb 18 22:06:22.745: INFO: 	Container weave ready: true, restart count 0
Feb 18 22:06:22.745: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 22:06:22.745: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.745: INFO: 	Container kube-controller-manager ready: true, restart count 14
Feb 18 22:06:22.745: INFO: agnhost-slave-774cfc759f-gllmv started at 2020-02-18 22:02:49 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.745: INFO: 	Container slave ready: true, restart count 0
Feb 18 22:06:22.745: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 18 22:06:22.745: INFO: 	Container kube-scheduler ready: true, restart count 18
W0218 22:06:22.857786       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 22:06:22.933: INFO: 
Latency metrics for node jerma-server-mvvl6gufaqub
Feb 18 22:06:22.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6756" for this suite.

• Failure [219.005 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385
    should create and stop a working application  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

    Feb 18 22:06:19.282: Cannot add new entry in 180 seconds.

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":128,"skipped":2069,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:06:22.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Feb 18 22:06:37.271: INFO: Pod pod-hostip-68fff39d-4152-4a78-b4c3-010f8f5d78d6 has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:06:37.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5542" for this suite.

• [SLOW TEST:14.338 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2075,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:06:37.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-acd092df-f50b-4408-bcb7-b305ebfed5ec
STEP: Creating a pod to test consume secrets
Feb 18 22:06:37.392: INFO: Waiting up to 5m0s for pod "pod-secrets-72d207d4-0187-487c-961a-9b9ac0ae15ea" in namespace "secrets-7243" to be "success or failure"
Feb 18 22:06:37.430: INFO: Pod "pod-secrets-72d207d4-0187-487c-961a-9b9ac0ae15ea": Phase="Pending", Reason="", readiness=false. Elapsed: 38.396351ms
Feb 18 22:06:39.437: INFO: Pod "pod-secrets-72d207d4-0187-487c-961a-9b9ac0ae15ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045712886s
Feb 18 22:06:41.447: INFO: Pod "pod-secrets-72d207d4-0187-487c-961a-9b9ac0ae15ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055162264s
Feb 18 22:06:43.484: INFO: Pod "pod-secrets-72d207d4-0187-487c-961a-9b9ac0ae15ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092035176s
Feb 18 22:06:45.493: INFO: Pod "pod-secrets-72d207d4-0187-487c-961a-9b9ac0ae15ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100974581s
Feb 18 22:06:47.508: INFO: Pod "pod-secrets-72d207d4-0187-487c-961a-9b9ac0ae15ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11658033s
STEP: Saw pod success
Feb 18 22:06:47.509: INFO: Pod "pod-secrets-72d207d4-0187-487c-961a-9b9ac0ae15ea" satisfied condition "success or failure"
Feb 18 22:06:47.522: INFO: Trying to get logs from node jerma-node pod pod-secrets-72d207d4-0187-487c-961a-9b9ac0ae15ea container secret-volume-test: 
STEP: delete the pod
Feb 18 22:06:47.569: INFO: Waiting for pod pod-secrets-72d207d4-0187-487c-961a-9b9ac0ae15ea to disappear
Feb 18 22:06:47.576: INFO: Pod pod-secrets-72d207d4-0187-487c-961a-9b9ac0ae15ea no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:06:47.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7243" for this suite.

• [SLOW TEST:10.356 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2075,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:06:47.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:06:58.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3590" for this suite.

• [SLOW TEST:11.265 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":131,"skipped":2089,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:06:58.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:07:08.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2117" for this suite.

• [SLOW TEST:9.155 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":132,"skipped":2090,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:07:08.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:07:08.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3372" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2115,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:07:08.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-1641d165-ecb9-4d08-9359-23266e90a5d4
STEP: Creating a pod to test consume configMaps
Feb 18 22:07:08.416: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c" in namespace "projected-1657" to be "success or failure"
Feb 18 22:07:08.532: INFO: Pod "pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c": Phase="Pending", Reason="", readiness=false. Elapsed: 115.432699ms
Feb 18 22:07:10.544: INFO: Pod "pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128282243s
Feb 18 22:07:12.556: INFO: Pod "pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140320866s
Feb 18 22:07:14.568: INFO: Pod "pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15187426s
Feb 18 22:07:16.578: INFO: Pod "pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161629718s
Feb 18 22:07:18.589: INFO: Pod "pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.17249112s
Feb 18 22:07:20.596: INFO: Pod "pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.180176536s
STEP: Saw pod success
Feb 18 22:07:20.597: INFO: Pod "pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c" satisfied condition "success or failure"
Feb 18 22:07:20.600: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 22:07:20.638: INFO: Waiting for pod pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c to disappear
Feb 18 22:07:20.651: INFO: Pod pod-projected-configmaps-50d61919-bb00-4637-aa2d-cc90cd26c70c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:07:20.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1657" for this suite.

• [SLOW TEST:12.407 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2116,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:07:20.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb 18 22:07:20.770: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:07:34.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7363" for this suite.

• [SLOW TEST:13.523 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":135,"skipped":2121,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:07:34.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 22:07:35.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:07:37.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:07:39.249: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:07:41.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:07:43.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660455, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 22:07:46.289: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:07:46.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-315-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:07:47.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7788" for this suite.
STEP: Destroying namespace "webhook-7788-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.578 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":136,"skipped":2135,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
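The mutation exercised above is delivered to the API server as a base64-encoded JSONPatch inside the AdmissionReview response. A minimal Python sketch of that response shape (the `uid` and patch operation here are illustrative, not the e2e sample webhook's actual patch):

```python
import base64
import json

def mutate_response(uid, patch_ops):
    """Build an admission.k8s.io/v1 AdmissionReview response that applies
    a JSONPatch. `uid` must echo the request UID; `patch_ops` is a list of
    RFC 6902 operations, base64-encoded as the admission API requires."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch_ops).encode()).decode(),
        },
    }

# Hypothetical mutation: add a field to the custom resource; in the
# pruning test, fields not declared in the CRD schema are then pruned.
resp = mutate_response(
    "3cf67b14-0000-0000-0000-000000000000",
    [{"op": "add", "path": "/data/mutation-stage-1", "value": "yes"}],
)
```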
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:07:47.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 18 22:07:47.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7374'
Feb 18 22:07:48.091: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 22:07:48.091: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773
Feb 18 22:07:48.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-7374'
Feb 18 22:07:48.310: INFO: stderr: ""
Feb 18 22:07:48.311: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:07:48.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7374" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":278,"completed":137,"skipped":2140,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
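The deprecated `--generator=job/v1` path above turns the `kubectl run` flags into a batch/v1 Job. A rough sketch of the object it produces (field layout approximated from the flags, not the exact generator output):

```python
def job_manifest(name, image, namespace):
    """Approximate the Job created by `kubectl run --generator=job/v1`."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "template": {
                "metadata": {"labels": {"run": name}},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                    # --restart=OnFailure becomes the pod restartPolicy;
                    # Jobs only accept OnFailure or Never (not Always).
                    "restartPolicy": "OnFailure",
                },
            }
        },
    }

job = job_manifest(
    "e2e-test-httpd-job",
    "docker.io/library/httpd:2.4.38-alpine",
    "kubectl-7374",
)
```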
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:07:48.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:08:29.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8732" for this suite.
STEP: Destroying namespace "nsdeletetest-3545" for this suite.
Feb 18 22:08:29.067: INFO: Namespace nsdeletetest-3545 was already deleted
STEP: Destroying namespace "nsdeletetest-7706" for this suite.

• [SLOW TEST:40.718 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":138,"skipped":2143,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:08:29.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-2591
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2591 to expose endpoints map[]
Feb 18 22:08:29.201: INFO: Get endpoints failed (3.061921ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 18 22:08:30.210: INFO: successfully validated that service endpoint-test2 in namespace services-2591 exposes endpoints map[] (1.012102153s elapsed)
STEP: Creating pod pod1 in namespace services-2591
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2591 to expose endpoints map[pod1:[80]]
Feb 18 22:08:34.367: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.148357691s elapsed, will retry)
Feb 18 22:08:36.905: INFO: successfully validated that service endpoint-test2 in namespace services-2591 exposes endpoints map[pod1:[80]] (6.686400567s elapsed)
STEP: Creating pod pod2 in namespace services-2591
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2591 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 18 22:08:41.311: INFO: Unexpected endpoints: found map[27a1d73d-ff6a-4357-b09b-e31c8b412a31:[80]], expected map[pod1:[80] pod2:[80]] (4.392375834s elapsed, will retry)
Feb 18 22:08:43.405: INFO: successfully validated that service endpoint-test2 in namespace services-2591 exposes endpoints map[pod1:[80] pod2:[80]] (6.485689079s elapsed)
STEP: Deleting pod pod1 in namespace services-2591
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2591 to expose endpoints map[pod2:[80]]
Feb 18 22:08:44.526: INFO: successfully validated that service endpoint-test2 in namespace services-2591 exposes endpoints map[pod2:[80]] (1.112878549s elapsed)
STEP: Deleting pod pod2 in namespace services-2591
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2591 to expose endpoints map[]
Feb 18 22:08:44.556: INFO: successfully validated that service endpoint-test2 in namespace services-2591 exposes endpoints map[] (19.194224ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:08:44.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2591" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:15.607 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":139,"skipped":2166,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
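The `map[pod1:[80] pod2:[80]]` strings above are the test flattening the service's Endpoints subsets into a pod-name-to-ports map. A sketch of that flattening (the subset literal below is illustrative):

```python
def endpoints_map(subsets):
    """Flatten Endpoints subsets into {podName: [ports]}, the form the
    e2e log prints, e.g. map[pod1:[80] pod2:[80]]."""
    result = {}
    for subset in subsets:
        ports = sorted(p["port"] for p in subset.get("ports", []))
        for addr in subset.get("addresses", []):
            result[addr["targetRef"]["name"]] = ports
    return result

# Hypothetical subset for two ready pods behind endpoint-test2:
subsets = [{
    "addresses": [
        {"ip": "10.44.0.1", "targetRef": {"kind": "Pod", "name": "pod1"}},
        {"ip": "10.44.0.2", "targetRef": {"kind": "Pod", "name": "pod2"}},
    ],
    "ports": [{"port": 80, "protocol": "TCP"}],
}]
emap = endpoints_map(subsets)
```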
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:08:44.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 22:08:47.211: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 22:08:49.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:08:51.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:08:53.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:08:55.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660527, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 22:08:58.400: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:09:08.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7310" for this suite.
STEP: Destroying namespace "webhook-7310-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:24.120 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":140,"skipped":2166,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
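Unlike the mutating case, the denials above come from AdmissionReview responses with `allowed: false`; the API server surfaces the embedded status message in the client's error. A minimal sketch of that denial shape (uid and reason are illustrative):

```python
def deny_response(uid, reason):
    """Build an admission.k8s.io/v1 AdmissionReview response that rejects
    the object. The `status.message` text is what the client sees."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": False,
            "status": {"code": 403, "message": reason},
        },
    }

resp = deny_response("example-uid", "the configmap contains unwanted key and value")
```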
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:09:08.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7136.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7136.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 22:09:23.016: INFO: DNS probes using dns-7136/dns-test-b77e6e66-6165-4768-be09-a6d4eb111482 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:09:23.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7136" for this suite.

• [SLOW TEST:14.337 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":141,"skipped":2166,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
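The `podARec` computed by the probe scripts' awk pipeline is the pod's A record name: the pod IP with dots replaced by dashes, under `<namespace>.pod.<cluster-domain>`. The same transform in Python (the sample IP is illustrative):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Reproduce the probe script's transform:
    10.44.0.5 in dns-7136 -> 10-44-0-5.dns-7136.pod.cluster.local"""
    return "%s.%s.pod.%s" % (pod_ip.replace(".", "-"), namespace, cluster_domain)

record = pod_a_record("10.44.0.5", "dns-7136")
```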
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:09:23.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:09:23.300: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 18 22:09:23.357: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 18 22:09:28.366: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 18 22:09:36.388: INFO: Creating deployment "test-rolling-update-deployment"
Feb 18 22:09:36.394: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 18 22:09:36.409: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 18 22:09:38.421: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Feb 18 22:09:38.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:09:40.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:09:42.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660576, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:09:44.501: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb 18 22:09:44.523: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-3984 /apis/apps/v1/namespaces/deployment-3984/deployments/test-rolling-update-deployment 236d793e-f363-4b2a-85e4-d79ec76c82a6 9274088 1 2020-02-18 22:09:36 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002db5148  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-18 22:09:36 +0000 UTC,LastTransitionTime:2020-02-18 22:09:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-18 22:09:44 +0000 UTC,LastTransitionTime:2020-02-18 22:09:36 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 18 22:09:44.526: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-3984 /apis/apps/v1/namespaces/deployment-3984/replicasets/test-rolling-update-deployment-67cf4f6444 6d4064cd-dd63-433d-a4d2-850319de030f 9274076 1 2020-02-18 22:09:36 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 236d793e-f363-4b2a-85e4-d79ec76c82a6 0xc002273497 0xc002273498}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022735c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 18 22:09:44.526: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 18 22:09:44.526: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-3984 /apis/apps/v1/namespaces/deployment-3984/replicasets/test-rolling-update-controller 3c1e36f3-6fe6-4493-9b00-7b2178b3716a 9274087 2 2020-02-18 22:09:23 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 236d793e-f363-4b2a-85e4-d79ec76c82a6 0xc002273287 0xc002273288}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0022732e8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 18 22:09:44.530: INFO: Pod "test-rolling-update-deployment-67cf4f6444-jq7kf" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-jq7kf test-rolling-update-deployment-67cf4f6444- deployment-3984 /api/v1/namespaces/deployment-3984/pods/test-rolling-update-deployment-67cf4f6444-jq7kf 2a58c0a4-94d8-4525-9512-3805ee89b65b 9274075 0 2020-02-18 22:09:36 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 6d4064cd-dd63-433d-a4d2-850319de030f 0xc002273a17 0xc002273a18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jqdh6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jqdh6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jqdh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Restart
Policy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:09:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:09:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:09:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:09:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-18 22:09:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 22:09:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://7decd753ab6319bd36fe04b9cb8078bda47ba14c16b71f0854f1d41d5c195d6b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:09:44.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3984" for this suite.

• [SLOW TEST:21.401 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":142,"skipped":2185,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
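Editor's note: the Deployment dumped in the struct output above can be read back as an equivalent YAML manifest. This is a hedged reconstruction from the logged spec (name, labels, image, replica count, and 25%/25% rolling-update strategy are all visible in the dump); it is not the exact object the e2e framework submits:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  namespace: deployment-3984
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # default strategy values, as shown in the dump
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
```

The old ReplicaSet dump (httpd:2.4.38-alpine, scaled to 0) shows the pre-update revision this manifest rolled over.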
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:09:44.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 22:09:44.747: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2" in namespace "downward-api-8346" to be "success or failure"
Feb 18 22:09:44.771: INFO: Pod "downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.339017ms
Feb 18 22:09:46.778: INFO: Pod "downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030523291s
Feb 18 22:09:48.784: INFO: Pod "downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037049091s
Feb 18 22:09:50.907: INFO: Pod "downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159207411s
Feb 18 22:09:52.918: INFO: Pod "downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170515775s
Feb 18 22:09:54.926: INFO: Pod "downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179088393s
Feb 18 22:09:56.972: INFO: Pod "downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.22481236s
Feb 18 22:09:59.132: INFO: Pod "downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.385060897s
STEP: Saw pod success
Feb 18 22:09:59.132: INFO: Pod "downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2" satisfied condition "success or failure"
Feb 18 22:09:59.138: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2 container client-container: 
STEP: delete the pod
Feb 18 22:09:59.459: INFO: Waiting for pod downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2 to disappear
Feb 18 22:09:59.469: INFO: Pod downwardapi-volume-1a929407-7420-4d87-a92f-16f249a256b2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:09:59.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8346" for this suite.

• [SLOW TEST:14.962 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2220,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
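Editor's note: the test above mounts a downward API volume exposing the container's CPU limit and checks the pod exits successfully. A minimal sketch of a pod doing the same thing follows; only the container name (client-container) comes from the log, while the image, command, mount path, and file name are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumption; the log does not name the image
    command: ["cat", "/etc/podinfo/cpu_limit"]   # illustrative check
    resources:
      limits:
        cpu: "1"                     # the limit the volume projects
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```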
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:09:59.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8900.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8900.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8900.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8900.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8900.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8900.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 22:10:11.735: INFO: DNS probes using dns-8900/dns-test-d99ff1f2-049f-4fa8-a50d-74e17214dcf0 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:10:11.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8900" for this suite.

• [SLOW TEST:12.434 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":144,"skipped":2242,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
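Editor's note: the probe loops above resolve dns-querier-2.dns-test-service-2.dns-8900.svc.cluster.local, which implies a headless service plus a pod with matching hostname and subdomain. A hedged sketch of that pair (labels, selector, and image are illustrative; the names come from the probed FQDN):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
  namespace: dns-8900
spec:
  clusterIP: None          # headless, per "Creating a test headless service"
  selector:
    name: dns-querier-2    # illustrative label
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  namespace: dns-8900
  labels:
    name: dns-querier-2
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2   # together these yield the probed FQDN
  containers:
  - name: querier
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumption
```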
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:10:11.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:10:18.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5584" for this suite.
STEP: Destroying namespace "nsdeletetest-5376" for this suite.
Feb 18 22:10:18.374: INFO: Namespace nsdeletetest-5376 was already deleted
STEP: Destroying namespace "nsdeletetest-4844" for this suite.

• [SLOW TEST:6.440 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":145,"skipped":2363,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
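Editor's note: the test above relies on namespace deletion cascading to all namespaced objects: a Service created like the sketch below disappears with its namespace, and recreating the namespace afterwards yields an empty one. All names and ports here are illustrative; the log does not show the service spec:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service          # illustrative
  namespace: nsdeletetest-5376
spec:
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 80
```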
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:10:18.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-9092d8e0-f0ed-4e07-a77d-025ff0905da2
STEP: Creating configMap with name cm-test-opt-upd-8aa4d5eb-193d-4892-80d3-fd4a2894c1e5
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9092d8e0-f0ed-4e07-a77d-025ff0905da2
STEP: Updating configmap cm-test-opt-upd-8aa4d5eb-193d-4892-80d3-fd4a2894c1e5
STEP: Creating configMap with name cm-test-opt-create-0840107c-d2c9-4c59-b295-60f5a87bc4ef
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:10:32.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7603" for this suite.

• [SLOW TEST:14.379 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2393,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
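Editor's note: the test above mounts ConfigMap volumes marked optional, then deletes one ConfigMap, updates another, and creates a third, watching the volume contents converge. A hedged pod sketch with the two initially-mounted volumes (ConfigMap names are taken from the log; pod name, image, and mount paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example    # the test uses a generated name
spec:
  containers:
  - name: volume-test
    image: busybox                # assumption
    command: ["sleep", "600"]     # keep the pod alive while volumes update
    volumeMounts:
    - name: cm-del
      mountPath: /etc/cm-volume-del
    - name: cm-upd
      mountPath: /etc/cm-volume-upd
  volumes:
  - name: cm-del
    configMap:
      name: cm-test-opt-del-9092d8e0-f0ed-4e07-a77d-025ff0905da2
      optional: true              # the point of the test: absence is tolerated
  - name: cm-upd
    configMap:
      name: cm-test-opt-upd-8aa4d5eb-193d-4892-80d3-fd4a2894c1e5
      optional: true
```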
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:10:32.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 18 22:10:46.048: INFO: Successfully updated pod "pod-update-17321091-d6a7-413d-84b9-78c48e38f79b"
STEP: verifying the updated pod is in kubernetes
Feb 18 22:10:46.067: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:10:46.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9771" for this suite.

• [SLOW TEST:13.314 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2410,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
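Editor's note: the "updating the pod" step mutates a live pod object (in the upstream test, typically one of its labels, since most pod spec fields are immutable) and re-reads it. An equivalent edit could be expressed as a strategic-merge patch fragment; the label key and value here are purely illustrative:

```yaml
# patch fragment for pod-update-17321091-d6a7-413d-84b9-78c48e38f79b;
# the specific mutated field is not shown in the log
metadata:
  labels:
    time: updated    # illustrative
```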
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:10:46.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Feb 18 22:10:46.217: INFO: Waiting up to 5m0s for pod "client-containers-c1ef8160-6ddf-47b3-861b-f2f33a0de659" in namespace "containers-8383" to be "success or failure"
Feb 18 22:10:46.224: INFO: Pod "client-containers-c1ef8160-6ddf-47b3-861b-f2f33a0de659": Phase="Pending", Reason="", readiness=false. Elapsed: 6.733342ms
Feb 18 22:10:48.231: INFO: Pod "client-containers-c1ef8160-6ddf-47b3-861b-f2f33a0de659": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014010115s
Feb 18 22:10:50.238: INFO: Pod "client-containers-c1ef8160-6ddf-47b3-861b-f2f33a0de659": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021013097s
Feb 18 22:10:52.245: INFO: Pod "client-containers-c1ef8160-6ddf-47b3-861b-f2f33a0de659": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027458954s
Feb 18 22:10:54.256: INFO: Pod "client-containers-c1ef8160-6ddf-47b3-861b-f2f33a0de659": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038316557s
Feb 18 22:10:56.261: INFO: Pod "client-containers-c1ef8160-6ddf-47b3-861b-f2f33a0de659": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.044186724s
STEP: Saw pod success
Feb 18 22:10:56.262: INFO: Pod "client-containers-c1ef8160-6ddf-47b3-861b-f2f33a0de659" satisfied condition "success or failure"
Feb 18 22:10:56.264: INFO: Trying to get logs from node jerma-node pod client-containers-c1ef8160-6ddf-47b3-861b-f2f33a0de659 container test-container: 
STEP: delete the pod
Feb 18 22:10:56.301: INFO: Waiting for pod client-containers-c1ef8160-6ddf-47b3-861b-f2f33a0de659 to disappear
Feb 18 22:10:56.304: INFO: Pod client-containers-c1ef8160-6ddf-47b3-861b-f2f33a0de659 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:10:56.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8383" for this suite.

• [SLOW TEST:10.234 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2414,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
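Editor's note: "Creating a pod to test override all" exercises the rule that a pod's `command` replaces the image ENTRYPOINT and `args` replaces the image CMD. A minimal sketch of such a pod (only the container name test-container appears in the log; image, command, and args are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # the test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # assumption
    command: ["/bin/echo"]          # overrides the image ENTRYPOINT
    args: ["override", "arguments"] # overrides the image CMD
```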
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:10:56.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb 18 22:10:56.397: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 22:10:59.535: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:11:13.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7678" for this suite.

• [SLOW TEST:17.042 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":149,"skipped":2422,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
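Editor's note: the test above registers two CRDs under different API groups and verifies both schemas appear in the served OpenAPI document. A hedged sketch of one such CRD (the real test generates random group names; everything below is illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.bar.example.com     # must be <plural>.<group>
spec:
  group: bar.example.com         # the second CRD would use a different group
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```

A second CRD with a different `spec.group` completes the "two CRDs, different groups" setup the test checks against the OpenAPI endpoint.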
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:11:13.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 18 22:11:13.436: INFO: Waiting up to 5m0s for pod "pod-463ff139-2252-4d16-8753-6f034d2a2aa9" in namespace "emptydir-6884" to be "success or failure"
Feb 18 22:11:13.443: INFO: Pod "pod-463ff139-2252-4d16-8753-6f034d2a2aa9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.03909ms
Feb 18 22:11:15.448: INFO: Pod "pod-463ff139-2252-4d16-8753-6f034d2a2aa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012587085s
Feb 18 22:11:17.454: INFO: Pod "pod-463ff139-2252-4d16-8753-6f034d2a2aa9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018452259s
Feb 18 22:11:19.462: INFO: Pod "pod-463ff139-2252-4d16-8753-6f034d2a2aa9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026174338s
Feb 18 22:11:21.957: INFO: Pod "pod-463ff139-2252-4d16-8753-6f034d2a2aa9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52097144s
Feb 18 22:11:23.966: INFO: Pod "pod-463ff139-2252-4d16-8753-6f034d2a2aa9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.530560381s
STEP: Saw pod success
Feb 18 22:11:23.967: INFO: Pod "pod-463ff139-2252-4d16-8753-6f034d2a2aa9" satisfied condition "success or failure"
Feb 18 22:11:23.974: INFO: Trying to get logs from node jerma-node pod pod-463ff139-2252-4d16-8753-6f034d2a2aa9 container test-container: 
STEP: delete the pod
Feb 18 22:11:24.474: INFO: Waiting for pod pod-463ff139-2252-4d16-8753-6f034d2a2aa9 to disappear
Feb 18 22:11:24.480: INFO: Pod pod-463ff139-2252-4d16-8753-6f034d2a2aa9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:11:24.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6884" for this suite.

• [SLOW TEST:11.228 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2438,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:11:24.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0218 22:11:35.789793       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 22:11:35.790: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:11:35.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5674" for this suite.

• [SLOW TEST:11.234 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":151,"skipped":2465,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:11:35.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-06b6128e-6c65-49a7-afaf-e6658f4c4d91
STEP: Creating a pod to test consume configMaps
Feb 18 22:11:36.110: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19508cdd-b7b6-44da-981f-abc333c132b7" in namespace "projected-1899" to be "success or failure"
Feb 18 22:11:36.153: INFO: Pod "pod-projected-configmaps-19508cdd-b7b6-44da-981f-abc333c132b7": Phase="Pending", Reason="", readiness=false. Elapsed: 42.995609ms
Feb 18 22:11:38.158: INFO: Pod "pod-projected-configmaps-19508cdd-b7b6-44da-981f-abc333c132b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048085377s
Feb 18 22:11:40.167: INFO: Pod "pod-projected-configmaps-19508cdd-b7b6-44da-981f-abc333c132b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057005302s
Feb 18 22:11:42.173: INFO: Pod "pod-projected-configmaps-19508cdd-b7b6-44da-981f-abc333c132b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062663247s
Feb 18 22:11:44.181: INFO: Pod "pod-projected-configmaps-19508cdd-b7b6-44da-981f-abc333c132b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070464664s
STEP: Saw pod success
Feb 18 22:11:44.181: INFO: Pod "pod-projected-configmaps-19508cdd-b7b6-44da-981f-abc333c132b7" satisfied condition "success or failure"
Feb 18 22:11:44.186: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-19508cdd-b7b6-44da-981f-abc333c132b7 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 22:11:44.222: INFO: Waiting for pod pod-projected-configmaps-19508cdd-b7b6-44da-981f-abc333c132b7 to disappear
Feb 18 22:11:44.235: INFO: Pod pod-projected-configmaps-19508cdd-b7b6-44da-981f-abc333c132b7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:11:44.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1899" for this suite.

• [SLOW TEST:8.424 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2511,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:11:44.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 22:11:45.160: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 22:11:47.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:11:49.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:11:51.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717660705, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 22:11:54.297: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:11:54.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:11:55.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6970" for this suite.
STEP: Destroying namespace "webhook-6970-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.177 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":153,"skipped":2519,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:11:55.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:12:55.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6467" for this suite.

• [SLOW TEST:60.143 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2557,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:12:55.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 22:12:55.728: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95b997cf-143b-4950-a672-bfd1425d2454" in namespace "projected-1487" to be "success or failure"
Feb 18 22:12:55.753: INFO: Pod "downwardapi-volume-95b997cf-143b-4950-a672-bfd1425d2454": Phase="Pending", Reason="", readiness=false. Elapsed: 24.375954ms
Feb 18 22:12:57.758: INFO: Pod "downwardapi-volume-95b997cf-143b-4950-a672-bfd1425d2454": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030232161s
Feb 18 22:12:59.766: INFO: Pod "downwardapi-volume-95b997cf-143b-4950-a672-bfd1425d2454": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037745789s
Feb 18 22:13:01.775: INFO: Pod "downwardapi-volume-95b997cf-143b-4950-a672-bfd1425d2454": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046894113s
Feb 18 22:13:03.794: INFO: Pod "downwardapi-volume-95b997cf-143b-4950-a672-bfd1425d2454": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065835628s
Feb 18 22:13:05.803: INFO: Pod "downwardapi-volume-95b997cf-143b-4950-a672-bfd1425d2454": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074778657s
STEP: Saw pod success
Feb 18 22:13:05.803: INFO: Pod "downwardapi-volume-95b997cf-143b-4950-a672-bfd1425d2454" satisfied condition "success or failure"
Feb 18 22:13:05.808: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-95b997cf-143b-4950-a672-bfd1425d2454 container client-container: 
STEP: delete the pod
Feb 18 22:13:05.870: INFO: Waiting for pod downwardapi-volume-95b997cf-143b-4950-a672-bfd1425d2454 to disappear
Feb 18 22:13:05.933: INFO: Pod downwardapi-volume-95b997cf-143b-4950-a672-bfd1425d2454 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:13:05.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1487" for this suite.

• [SLOW TEST:10.517 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2564,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:13:06.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 18 22:13:15.172: INFO: 9 pods remaining
Feb 18 22:13:15.172: INFO: 0 pods has nil DeletionTimestamp
Feb 18 22:13:15.172: INFO: 
Feb 18 22:13:15.733: INFO: 0 pods remaining
Feb 18 22:13:15.733: INFO: 0 pods has nil DeletionTimestamp
Feb 18 22:13:15.733: INFO: 
STEP: Gathering metrics
W0218 22:13:16.393669       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 22:13:16.393: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:13:16.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8432" for this suite.

• [SLOW TEST:10.531 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":156,"skipped":2584,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:13:16.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:13:18.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4613
I0218 22:13:18.119594       8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4613, replica count: 1
I0218 22:13:19.171252       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:20.173058       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:21.173656       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:22.174070       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:23.174517       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:24.175127       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:25.175731       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:26.176433       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:27.176912       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:28.177470       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:29.177944       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:30.178752       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:31.179313       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:32.179970       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:33.180810       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:34.181331       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:13:35.181934       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 18 22:13:36.516: INFO: Created: latency-svc-txbsx
Feb 18 22:13:36.540: INFO: Got endpoints: latency-svc-txbsx [1.257713816s]
Feb 18 22:13:36.685: INFO: Created: latency-svc-tmnrt
Feb 18 22:13:36.759: INFO: Got endpoints: latency-svc-tmnrt [217.461728ms]
Feb 18 22:13:36.767: INFO: Created: latency-svc-lwt22
Feb 18 22:13:36.875: INFO: Got endpoints: latency-svc-lwt22 [334.364365ms]
Feb 18 22:13:36.962: INFO: Created: latency-svc-dqtrl
Feb 18 22:13:37.040: INFO: Got endpoints: latency-svc-dqtrl [497.894026ms]
Feb 18 22:13:37.086: INFO: Created: latency-svc-b57fl
Feb 18 22:13:37.099: INFO: Got endpoints: latency-svc-b57fl [558.650641ms]
Feb 18 22:13:37.253: INFO: Created: latency-svc-vmsvq
Feb 18 22:13:37.272: INFO: Got endpoints: latency-svc-vmsvq [729.370938ms]
Feb 18 22:13:37.307: INFO: Created: latency-svc-tpj5t
Feb 18 22:13:37.337: INFO: Got endpoints: latency-svc-tpj5t [793.96102ms]
Feb 18 22:13:37.474: INFO: Created: latency-svc-4bzg8
Feb 18 22:13:37.486: INFO: Got endpoints: latency-svc-4bzg8 [945.982212ms]
Feb 18 22:13:37.507: INFO: Created: latency-svc-b8822
Feb 18 22:13:37.519: INFO: Got endpoints: latency-svc-b8822 [976.362978ms]
Feb 18 22:13:37.644: INFO: Created: latency-svc-wzhbq
Feb 18 22:13:37.647: INFO: Got endpoints: latency-svc-wzhbq [1.10482652s]
Feb 18 22:13:37.682: INFO: Created: latency-svc-nsp8n
Feb 18 22:13:37.695: INFO: Got endpoints: latency-svc-nsp8n [1.152039654s]
Feb 18 22:13:37.712: INFO: Created: latency-svc-xfql6
Feb 18 22:13:37.834: INFO: Got endpoints: latency-svc-xfql6 [1.291767648s]
Feb 18 22:13:37.838: INFO: Created: latency-svc-d2mnn
Feb 18 22:13:37.842: INFO: Got endpoints: latency-svc-d2mnn [1.300372814s]
Feb 18 22:13:37.935: INFO: Created: latency-svc-rnl7m
Feb 18 22:13:38.038: INFO: Got endpoints: latency-svc-rnl7m [1.497945021s]
Feb 18 22:13:38.078: INFO: Created: latency-svc-578cq
Feb 18 22:13:38.096: INFO: Got endpoints: latency-svc-578cq [1.549743611s]
Feb 18 22:13:38.101: INFO: Created: latency-svc-cn8ww
Feb 18 22:13:38.107: INFO: Got endpoints: latency-svc-cn8ww [1.559094166s]
Feb 18 22:13:38.138: INFO: Created: latency-svc-vfdcj
Feb 18 22:13:38.211: INFO: Got endpoints: latency-svc-vfdcj [1.451088248s]
Feb 18 22:13:38.238: INFO: Created: latency-svc-kn4h2
Feb 18 22:13:38.238: INFO: Got endpoints: latency-svc-kn4h2 [1.36195886s]
Feb 18 22:13:38.263: INFO: Created: latency-svc-52z9r
Feb 18 22:13:38.265: INFO: Got endpoints: latency-svc-52z9r [1.224061964s]
Feb 18 22:13:38.294: INFO: Created: latency-svc-vdckd
Feb 18 22:13:38.359: INFO: Got endpoints: latency-svc-vdckd [1.259285182s]
Feb 18 22:13:38.362: INFO: Created: latency-svc-hrhkz
Feb 18 22:13:38.370: INFO: Got endpoints: latency-svc-hrhkz [1.098045333s]
Feb 18 22:13:38.409: INFO: Created: latency-svc-k2n7x
Feb 18 22:13:38.440: INFO: Got endpoints: latency-svc-k2n7x [1.102677685s]
Feb 18 22:13:38.441: INFO: Created: latency-svc-ndkst
Feb 18 22:13:38.453: INFO: Got endpoints: latency-svc-ndkst [966.600725ms]
Feb 18 22:13:38.527: INFO: Created: latency-svc-bk6gs
Feb 18 22:13:38.533: INFO: Got endpoints: latency-svc-bk6gs [1.013963064s]
Feb 18 22:13:38.562: INFO: Created: latency-svc-xnjms
Feb 18 22:13:38.567: INFO: Got endpoints: latency-svc-xnjms [918.693378ms]
Feb 18 22:13:38.595: INFO: Created: latency-svc-7bkgh
Feb 18 22:13:38.601: INFO: Got endpoints: latency-svc-7bkgh [905.874954ms]
Feb 18 22:13:38.691: INFO: Created: latency-svc-wjdcz
Feb 18 22:13:38.691: INFO: Got endpoints: latency-svc-wjdcz [856.604661ms]
Feb 18 22:13:38.715: INFO: Created: latency-svc-48gv8
Feb 18 22:13:38.758: INFO: Got endpoints: latency-svc-48gv8 [916.095833ms]
Feb 18 22:13:38.767: INFO: Created: latency-svc-bt54d
Feb 18 22:13:38.849: INFO: Got endpoints: latency-svc-bt54d [809.98322ms]
Feb 18 22:13:38.880: INFO: Created: latency-svc-88dws
Feb 18 22:13:38.900: INFO: Got endpoints: latency-svc-88dws [803.481024ms]
Feb 18 22:13:38.901: INFO: Created: latency-svc-c52mg
Feb 18 22:13:38.919: INFO: Got endpoints: latency-svc-c52mg [811.075239ms]
Feb 18 22:13:38.953: INFO: Created: latency-svc-gz4nz
Feb 18 22:13:39.017: INFO: Got endpoints: latency-svc-gz4nz [806.353676ms]
Feb 18 22:13:39.043: INFO: Created: latency-svc-8858h
Feb 18 22:13:39.052: INFO: Got endpoints: latency-svc-8858h [814.268862ms]
Feb 18 22:13:39.076: INFO: Created: latency-svc-l28bc
Feb 18 22:13:39.292: INFO: Got endpoints: latency-svc-l28bc [1.02695873s]
Feb 18 22:13:39.294: INFO: Created: latency-svc-7njdz
Feb 18 22:13:39.304: INFO: Got endpoints: latency-svc-7njdz [944.229149ms]
Feb 18 22:13:39.333: INFO: Created: latency-svc-k4twt
Feb 18 22:13:39.344: INFO: Got endpoints: latency-svc-k4twt [973.304699ms]
Feb 18 22:13:39.504: INFO: Created: latency-svc-bl2vh
Feb 18 22:13:39.535: INFO: Got endpoints: latency-svc-bl2vh [1.094479754s]
Feb 18 22:13:39.559: INFO: Created: latency-svc-8cvp9
Feb 18 22:13:39.585: INFO: Got endpoints: latency-svc-8cvp9 [1.13188772s]
Feb 18 22:13:39.683: INFO: Created: latency-svc-qpc95
Feb 18 22:13:39.689: INFO: Got endpoints: latency-svc-qpc95 [154.424826ms]
Feb 18 22:13:39.718: INFO: Created: latency-svc-b9sdf
Feb 18 22:13:39.733: INFO: Got endpoints: latency-svc-b9sdf [1.199872445s]
Feb 18 22:13:39.765: INFO: Created: latency-svc-nz94t
Feb 18 22:13:39.769: INFO: Got endpoints: latency-svc-nz94t [1.202355475s]
Feb 18 22:13:39.836: INFO: Created: latency-svc-272gb
Feb 18 22:13:39.844: INFO: Got endpoints: latency-svc-272gb [1.242874393s]
Feb 18 22:13:39.872: INFO: Created: latency-svc-h7hvf
Feb 18 22:13:39.875: INFO: Got endpoints: latency-svc-h7hvf [1.184289083s]
Feb 18 22:13:39.914: INFO: Created: latency-svc-8nrvz
Feb 18 22:13:39.929: INFO: Got endpoints: latency-svc-8nrvz [1.170114492s]
Feb 18 22:13:40.000: INFO: Created: latency-svc-8qk75
Feb 18 22:13:40.009: INFO: Got endpoints: latency-svc-8qk75 [1.159644058s]
Feb 18 22:13:40.035: INFO: Created: latency-svc-km7nf
Feb 18 22:13:40.039: INFO: Got endpoints: latency-svc-km7nf [1.138427343s]
Feb 18 22:13:40.078: INFO: Created: latency-svc-qddlt
Feb 18 22:13:40.079: INFO: Got endpoints: latency-svc-qddlt [1.160393424s]
Feb 18 22:13:40.099: INFO: Created: latency-svc-jndp9
Feb 18 22:13:40.155: INFO: Got endpoints: latency-svc-jndp9 [1.137409329s]
Feb 18 22:13:40.160: INFO: Created: latency-svc-s8cqc
Feb 18 22:13:40.163: INFO: Got endpoints: latency-svc-s8cqc [1.111000245s]
Feb 18 22:13:40.200: INFO: Created: latency-svc-ghrhw
Feb 18 22:13:40.202: INFO: Got endpoints: latency-svc-ghrhw [909.838562ms]
Feb 18 22:13:40.223: INFO: Created: latency-svc-2crfk
Feb 18 22:13:40.245: INFO: Got endpoints: latency-svc-2crfk [940.949347ms]
Feb 18 22:13:40.254: INFO: Created: latency-svc-2jt6t
Feb 18 22:13:40.315: INFO: Got endpoints: latency-svc-2jt6t [970.917301ms]
Feb 18 22:13:40.340: INFO: Created: latency-svc-bkhq9
Feb 18 22:13:40.341: INFO: Got endpoints: latency-svc-bkhq9 [755.184567ms]
Feb 18 22:13:40.358: INFO: Created: latency-svc-xzmlh
Feb 18 22:13:40.367: INFO: Got endpoints: latency-svc-xzmlh [677.228207ms]
Feb 18 22:13:40.385: INFO: Created: latency-svc-cphrg
Feb 18 22:13:40.388: INFO: Got endpoints: latency-svc-cphrg [655.317152ms]
Feb 18 22:13:40.412: INFO: Created: latency-svc-fxt9c
Feb 18 22:13:40.538: INFO: Got endpoints: latency-svc-fxt9c [769.046241ms]
Feb 18 22:13:40.585: INFO: Created: latency-svc-n6f74
Feb 18 22:13:40.585: INFO: Created: latency-svc-gftp9
Feb 18 22:13:40.590: INFO: Got endpoints: latency-svc-gftp9 [715.215548ms]
Feb 18 22:13:40.592: INFO: Got endpoints: latency-svc-n6f74 [747.80808ms]
Feb 18 22:13:40.621: INFO: Created: latency-svc-rjvc6
Feb 18 22:13:40.625: INFO: Got endpoints: latency-svc-rjvc6 [696.186027ms]
Feb 18 22:13:40.806: INFO: Created: latency-svc-msnxz
Feb 18 22:13:40.830: INFO: Got endpoints: latency-svc-msnxz [821.204605ms]
Feb 18 22:13:40.873: INFO: Created: latency-svc-bgmlf
Feb 18 22:13:40.889: INFO: Got endpoints: latency-svc-bgmlf [850.787505ms]
Feb 18 22:13:41.026: INFO: Created: latency-svc-zjb29
Feb 18 22:13:41.026: INFO: Got endpoints: latency-svc-zjb29 [947.390881ms]
Feb 18 22:13:41.087: INFO: Created: latency-svc-jmjrf
Feb 18 22:13:41.117: INFO: Got endpoints: latency-svc-jmjrf [961.934969ms]
Feb 18 22:13:41.123: INFO: Created: latency-svc-6d66l
Feb 18 22:13:41.170: INFO: Got endpoints: latency-svc-6d66l [1.006777631s]
Feb 18 22:13:41.183: INFO: Created: latency-svc-sn6kl
Feb 18 22:13:41.207: INFO: Got endpoints: latency-svc-sn6kl [1.004754793s]
Feb 18 22:13:41.208: INFO: Created: latency-svc-nwwqp
Feb 18 22:13:41.226: INFO: Got endpoints: latency-svc-nwwqp [980.79813ms]
Feb 18 22:13:41.231: INFO: Created: latency-svc-bkv45
Feb 18 22:13:41.233: INFO: Got endpoints: latency-svc-bkv45 [918.045288ms]
Feb 18 22:13:41.260: INFO: Created: latency-svc-p7p7g
Feb 18 22:13:41.262: INFO: Got endpoints: latency-svc-p7p7g [921.337784ms]
Feb 18 22:13:41.444: INFO: Created: latency-svc-5zdw6
Feb 18 22:13:41.452: INFO: Got endpoints: latency-svc-5zdw6 [1.085133877s]
Feb 18 22:13:41.477: INFO: Created: latency-svc-jxghx
Feb 18 22:13:41.488: INFO: Got endpoints: latency-svc-jxghx [1.099599035s]
Feb 18 22:13:41.534: INFO: Created: latency-svc-5gsc9
Feb 18 22:13:41.593: INFO: Got endpoints: latency-svc-5gsc9 [1.054197098s]
Feb 18 22:13:41.595: INFO: Created: latency-svc-7grqz
Feb 18 22:13:41.606: INFO: Got endpoints: latency-svc-7grqz [1.015755898s]
Feb 18 22:13:41.637: INFO: Created: latency-svc-bt7c8
Feb 18 22:13:41.658: INFO: Got endpoints: latency-svc-bt7c8 [1.065858281s]
Feb 18 22:13:41.718: INFO: Created: latency-svc-h5pm4
Feb 18 22:13:41.727: INFO: Got endpoints: latency-svc-h5pm4 [1.102109598s]
Feb 18 22:13:41.748: INFO: Created: latency-svc-v92db
Feb 18 22:13:41.772: INFO: Got endpoints: latency-svc-v92db [942.120233ms]
Feb 18 22:13:41.800: INFO: Created: latency-svc-pw2g9
Feb 18 22:13:41.805: INFO: Got endpoints: latency-svc-pw2g9 [914.994269ms]
Feb 18 22:13:41.952: INFO: Created: latency-svc-7xtq2
Feb 18 22:13:41.953: INFO: Got endpoints: latency-svc-7xtq2 [926.596421ms]
Feb 18 22:13:42.008: INFO: Created: latency-svc-rsx8c
Feb 18 22:13:42.028: INFO: Got endpoints: latency-svc-rsx8c [911.021979ms]
Feb 18 22:13:42.098: INFO: Created: latency-svc-wnpk4
Feb 18 22:13:42.125: INFO: Got endpoints: latency-svc-wnpk4 [954.954767ms]
Feb 18 22:13:42.135: INFO: Created: latency-svc-g9stv
Feb 18 22:13:42.164: INFO: Got endpoints: latency-svc-g9stv [957.192135ms]
Feb 18 22:13:42.172: INFO: Created: latency-svc-sq4dh
Feb 18 22:13:42.191: INFO: Got endpoints: latency-svc-sq4dh [965.39634ms]
Feb 18 22:13:42.246: INFO: Created: latency-svc-lnsbg
Feb 18 22:13:42.257: INFO: Got endpoints: latency-svc-lnsbg [1.023851792s]
Feb 18 22:13:42.288: INFO: Created: latency-svc-vmg89
Feb 18 22:13:42.289: INFO: Got endpoints: latency-svc-vmg89 [1.026553243s]
Feb 18 22:13:42.314: INFO: Created: latency-svc-mwgbd
Feb 18 22:13:42.387: INFO: Got endpoints: latency-svc-mwgbd [934.598671ms]
Feb 18 22:13:42.395: INFO: Created: latency-svc-w78hh
Feb 18 22:13:42.400: INFO: Got endpoints: latency-svc-w78hh [911.761638ms]
Feb 18 22:13:42.427: INFO: Created: latency-svc-rxg6b
Feb 18 22:13:42.437: INFO: Got endpoints: latency-svc-rxg6b [843.57443ms]
Feb 18 22:13:42.462: INFO: Created: latency-svc-hd465
Feb 18 22:13:42.462: INFO: Got endpoints: latency-svc-hd465 [855.502849ms]
Feb 18 22:13:42.525: INFO: Created: latency-svc-gb6nt
Feb 18 22:13:42.527: INFO: Got endpoints: latency-svc-gb6nt [868.757974ms]
Feb 18 22:13:42.579: INFO: Created: latency-svc-sq5jv
Feb 18 22:13:42.587: INFO: Got endpoints: latency-svc-sq5jv [859.686909ms]
Feb 18 22:13:42.610: INFO: Created: latency-svc-bb8dr
Feb 18 22:13:42.698: INFO: Got endpoints: latency-svc-bb8dr [925.554565ms]
Feb 18 22:13:42.698: INFO: Created: latency-svc-6bxd9
Feb 18 22:13:42.716: INFO: Got endpoints: latency-svc-6bxd9 [911.216875ms]
Feb 18 22:13:42.718: INFO: Created: latency-svc-gqjbg
Feb 18 22:13:42.722: INFO: Got endpoints: latency-svc-gqjbg [769.233895ms]
Feb 18 22:13:42.740: INFO: Created: latency-svc-d5tfj
Feb 18 22:13:42.744: INFO: Got endpoints: latency-svc-d5tfj [715.798667ms]
Feb 18 22:13:42.837: INFO: Created: latency-svc-wqczk
Feb 18 22:13:42.860: INFO: Got endpoints: latency-svc-wqczk [734.476532ms]
Feb 18 22:13:42.868: INFO: Created: latency-svc-85c9t
Feb 18 22:13:42.869: INFO: Got endpoints: latency-svc-85c9t [704.915911ms]
Feb 18 22:13:42.901: INFO: Created: latency-svc-wskqm
Feb 18 22:13:42.905: INFO: Got endpoints: latency-svc-wskqm [713.394721ms]
Feb 18 22:13:43.007: INFO: Created: latency-svc-tngvc
Feb 18 22:13:43.033: INFO: Got endpoints: latency-svc-tngvc [776.119139ms]
Feb 18 22:13:43.041: INFO: Created: latency-svc-bps6z
Feb 18 22:13:43.043: INFO: Got endpoints: latency-svc-bps6z [754.474031ms]
Feb 18 22:13:43.077: INFO: Created: latency-svc-rpsrn
Feb 18 22:13:43.080: INFO: Got endpoints: latency-svc-rpsrn [693.256525ms]
Feb 18 22:13:43.101: INFO: Created: latency-svc-2fdtn
Feb 18 22:13:43.135: INFO: Got endpoints: latency-svc-2fdtn [734.700557ms]
Feb 18 22:13:43.150: INFO: Created: latency-svc-8tf4l
Feb 18 22:13:43.153: INFO: Got endpoints: latency-svc-8tf4l [716.48153ms]
Feb 18 22:13:43.182: INFO: Created: latency-svc-glks8
Feb 18 22:13:43.192: INFO: Got endpoints: latency-svc-glks8 [730.116903ms]
Feb 18 22:13:43.226: INFO: Created: latency-svc-g59l7
Feb 18 22:13:43.230: INFO: Got endpoints: latency-svc-g59l7 [703.107331ms]
Feb 18 22:13:43.293: INFO: Created: latency-svc-mvlf6
Feb 18 22:13:43.294: INFO: Got endpoints: latency-svc-mvlf6 [706.855064ms]
Feb 18 22:13:43.347: INFO: Created: latency-svc-fqh6k
Feb 18 22:13:43.362: INFO: Got endpoints: latency-svc-fqh6k [663.823739ms]
Feb 18 22:13:43.365: INFO: Created: latency-svc-7fz99
Feb 18 22:13:43.463: INFO: Got endpoints: latency-svc-7fz99 [746.813707ms]
Feb 18 22:13:43.472: INFO: Created: latency-svc-24cn9
Feb 18 22:13:43.473: INFO: Got endpoints: latency-svc-24cn9 [749.968639ms]
Feb 18 22:13:43.510: INFO: Created: latency-svc-26ln6
Feb 18 22:13:43.515: INFO: Got endpoints: latency-svc-26ln6 [770.957562ms]
Feb 18 22:13:43.541: INFO: Created: latency-svc-hz9vb
Feb 18 22:13:43.542: INFO: Got endpoints: latency-svc-hz9vb [681.419644ms]
Feb 18 22:13:43.571: INFO: Created: latency-svc-62fnq
Feb 18 22:13:43.642: INFO: Got endpoints: latency-svc-62fnq [772.898777ms]
Feb 18 22:13:43.681: INFO: Created: latency-svc-knlpf
Feb 18 22:13:43.694: INFO: Got endpoints: latency-svc-knlpf [788.904597ms]
Feb 18 22:13:43.733: INFO: Created: latency-svc-szx4b
Feb 18 22:13:43.733: INFO: Got endpoints: latency-svc-szx4b [699.847504ms]
Feb 18 22:13:43.775: INFO: Created: latency-svc-qbgnc
Feb 18 22:13:43.784: INFO: Got endpoints: latency-svc-qbgnc [740.776185ms]
Feb 18 22:13:43.807: INFO: Created: latency-svc-p9gsw
Feb 18 22:13:43.813: INFO: Got endpoints: latency-svc-p9gsw [732.436068ms]
Feb 18 22:13:43.842: INFO: Created: latency-svc-jfcms
Feb 18 22:13:43.851: INFO: Got endpoints: latency-svc-jfcms [716.29532ms]
Feb 18 22:13:43.942: INFO: Created: latency-svc-bxml5
Feb 18 22:13:43.953: INFO: Got endpoints: latency-svc-bxml5 [799.513278ms]
Feb 18 22:13:43.974: INFO: Created: latency-svc-wddv8
Feb 18 22:13:43.975: INFO: Got endpoints: latency-svc-wddv8 [782.785366ms]
Feb 18 22:13:44.002: INFO: Created: latency-svc-shnjb
Feb 18 22:13:44.005: INFO: Got endpoints: latency-svc-shnjb [775.454317ms]
Feb 18 22:13:44.092: INFO: Created: latency-svc-cmjd5
Feb 18 22:13:44.102: INFO: Got endpoints: latency-svc-cmjd5 [807.452368ms]
Feb 18 22:13:44.113: INFO: Created: latency-svc-m946f
Feb 18 22:13:44.115: INFO: Got endpoints: latency-svc-m946f [753.47297ms]
Feb 18 22:13:44.144: INFO: Created: latency-svc-jjszt
Feb 18 22:13:44.150: INFO: Got endpoints: latency-svc-jjszt [686.556117ms]
Feb 18 22:13:44.171: INFO: Created: latency-svc-cm2dp
Feb 18 22:13:44.225: INFO: Got endpoints: latency-svc-cm2dp [751.925709ms]
Feb 18 22:13:44.240: INFO: Created: latency-svc-69gmd
Feb 18 22:13:44.247: INFO: Got endpoints: latency-svc-69gmd [731.466429ms]
Feb 18 22:13:44.272: INFO: Created: latency-svc-cjqln
Feb 18 22:13:44.276: INFO: Got endpoints: latency-svc-cjqln [733.521331ms]
Feb 18 22:13:44.292: INFO: Created: latency-svc-rt2hs
Feb 18 22:13:44.315: INFO: Got endpoints: latency-svc-rt2hs [671.83191ms]
Feb 18 22:13:44.357: INFO: Created: latency-svc-wc5d2
Feb 18 22:13:44.359: INFO: Got endpoints: latency-svc-wc5d2 [664.616635ms]
Feb 18 22:13:44.391: INFO: Created: latency-svc-6684c
Feb 18 22:13:44.414: INFO: Created: latency-svc-8hhq4
Feb 18 22:13:44.415: INFO: Got endpoints: latency-svc-6684c [681.987078ms]
Feb 18 22:13:44.436: INFO: Got endpoints: latency-svc-8hhq4 [651.608872ms]
Feb 18 22:13:44.457: INFO: Created: latency-svc-gfz8b
Feb 18 22:13:44.528: INFO: Got endpoints: latency-svc-gfz8b [715.223381ms]
Feb 18 22:13:44.542: INFO: Created: latency-svc-rknfs
Feb 18 22:13:44.546: INFO: Got endpoints: latency-svc-rknfs [694.497597ms]
Feb 18 22:13:44.591: INFO: Created: latency-svc-cnbs2
Feb 18 22:13:44.598: INFO: Got endpoints: latency-svc-cnbs2 [644.606977ms]
Feb 18 22:13:44.669: INFO: Created: latency-svc-57vlf
Feb 18 22:13:44.706: INFO: Got endpoints: latency-svc-57vlf [731.020579ms]
Feb 18 22:13:44.711: INFO: Created: latency-svc-9pnmp
Feb 18 22:13:44.720: INFO: Got endpoints: latency-svc-9pnmp [714.461787ms]
Feb 18 22:13:44.744: INFO: Created: latency-svc-qk455
Feb 18 22:13:44.749: INFO: Got endpoints: latency-svc-qk455 [646.697501ms]
Feb 18 22:13:44.825: INFO: Created: latency-svc-wcr9n
Feb 18 22:13:44.827: INFO: Got endpoints: latency-svc-wcr9n [711.174958ms]
Feb 18 22:13:44.872: INFO: Created: latency-svc-mjf72
Feb 18 22:13:44.877: INFO: Got endpoints: latency-svc-mjf72 [726.791725ms]
Feb 18 22:13:44.899: INFO: Created: latency-svc-mz48s
Feb 18 22:13:44.904: INFO: Got endpoints: latency-svc-mz48s [679.871204ms]
Feb 18 22:13:44.964: INFO: Created: latency-svc-mf75w
Feb 18 22:13:44.972: INFO: Got endpoints: latency-svc-mf75w [724.959764ms]
Feb 18 22:13:44.976: INFO: Created: latency-svc-r2jcg
Feb 18 22:13:44.980: INFO: Got endpoints: latency-svc-r2jcg [703.547918ms]
Feb 18 22:13:45.009: INFO: Created: latency-svc-vrhnb
Feb 18 22:13:45.037: INFO: Got endpoints: latency-svc-vrhnb [722.304916ms]
Feb 18 22:13:45.054: INFO: Created: latency-svc-xnl8s
Feb 18 22:13:45.086: INFO: Got endpoints: latency-svc-xnl8s [727.455546ms]
Feb 18 22:13:45.097: INFO: Created: latency-svc-6nzwq
Feb 18 22:13:45.104: INFO: Got endpoints: latency-svc-6nzwq [689.39784ms]
Feb 18 22:13:45.126: INFO: Created: latency-svc-d8p9c
Feb 18 22:13:45.144: INFO: Got endpoints: latency-svc-d8p9c [708.198208ms]
Feb 18 22:13:45.156: INFO: Created: latency-svc-p2mkk
Feb 18 22:13:45.174: INFO: Created: latency-svc-vxx9f
Feb 18 22:13:45.175: INFO: Got endpoints: latency-svc-p2mkk [646.236374ms]
Feb 18 22:13:45.179: INFO: Got endpoints: latency-svc-vxx9f [632.335463ms]
Feb 18 22:13:45.239: INFO: Created: latency-svc-wfr2l
Feb 18 22:13:45.252: INFO: Got endpoints: latency-svc-wfr2l [653.832708ms]
Feb 18 22:13:45.266: INFO: Created: latency-svc-ttg4v
Feb 18 22:13:45.271: INFO: Got endpoints: latency-svc-ttg4v [564.195309ms]
Feb 18 22:13:45.289: INFO: Created: latency-svc-q6lnm
Feb 18 22:13:45.301: INFO: Got endpoints: latency-svc-q6lnm [581.11897ms]
Feb 18 22:13:45.324: INFO: Created: latency-svc-rdrhb
Feb 18 22:13:45.387: INFO: Got endpoints: latency-svc-rdrhb [637.898209ms]
Feb 18 22:13:45.409: INFO: Created: latency-svc-mgq5h
Feb 18 22:13:45.415: INFO: Got endpoints: latency-svc-mgq5h [588.103872ms]
Feb 18 22:13:45.485: INFO: Created: latency-svc-msxvg
Feb 18 22:13:45.564: INFO: Created: latency-svc-9pnsl
Feb 18 22:13:45.571: INFO: Got endpoints: latency-svc-msxvg [693.760078ms]
Feb 18 22:13:45.571: INFO: Got endpoints: latency-svc-9pnsl [666.705477ms]
Feb 18 22:13:45.614: INFO: Created: latency-svc-z5t4m
Feb 18 22:13:45.626: INFO: Got endpoints: latency-svc-z5t4m [653.403292ms]
Feb 18 22:13:45.647: INFO: Created: latency-svc-ztp7q
Feb 18 22:13:45.651: INFO: Got endpoints: latency-svc-ztp7q [671.642601ms]
Feb 18 22:13:45.693: INFO: Created: latency-svc-f2szj
Feb 18 22:13:45.705: INFO: Got endpoints: latency-svc-f2szj [667.795756ms]
Feb 18 22:13:45.726: INFO: Created: latency-svc-6f7tv
Feb 18 22:13:45.732: INFO: Got endpoints: latency-svc-6f7tv [645.520813ms]
Feb 18 22:13:45.761: INFO: Created: latency-svc-tsbtn
Feb 18 22:13:45.763: INFO: Got endpoints: latency-svc-tsbtn [658.936856ms]
Feb 18 22:13:45.829: INFO: Created: latency-svc-nvhwj
Feb 18 22:13:45.839: INFO: Got endpoints: latency-svc-nvhwj [694.501675ms]
Feb 18 22:13:45.862: INFO: Created: latency-svc-w2z8v
Feb 18 22:13:45.875: INFO: Got endpoints: latency-svc-w2z8v [700.341562ms]
Feb 18 22:13:45.910: INFO: Created: latency-svc-lvwcq
Feb 18 22:13:45.920: INFO: Got endpoints: latency-svc-lvwcq [741.363169ms]
Feb 18 22:13:45.987: INFO: Created: latency-svc-nlrxg
Feb 18 22:13:45.987: INFO: Got endpoints: latency-svc-nlrxg [735.025568ms]
Feb 18 22:13:46.003: INFO: Created: latency-svc-g2ngc
Feb 18 22:13:46.010: INFO: Got endpoints: latency-svc-g2ngc [739.477711ms]
Feb 18 22:13:46.090: INFO: Created: latency-svc-bvk65
Feb 18 22:13:46.094: INFO: Got endpoints: latency-svc-bvk65 [793.057785ms]
Feb 18 22:13:46.123: INFO: Created: latency-svc-55xft
Feb 18 22:13:46.123: INFO: Got endpoints: latency-svc-55xft [736.122979ms]
Feb 18 22:13:46.143: INFO: Created: latency-svc-xkbwx
Feb 18 22:13:46.143: INFO: Got endpoints: latency-svc-xkbwx [728.330911ms]
Feb 18 22:13:46.173: INFO: Created: latency-svc-ps2dp
Feb 18 22:13:46.178: INFO: Got endpoints: latency-svc-ps2dp [606.916276ms]
Feb 18 22:13:46.228: INFO: Created: latency-svc-xjqs5
Feb 18 22:13:46.238: INFO: Got endpoints: latency-svc-xjqs5 [666.22678ms]
Feb 18 22:13:46.259: INFO: Created: latency-svc-5ng4s
Feb 18 22:13:46.278: INFO: Got endpoints: latency-svc-5ng4s [652.596456ms]
Feb 18 22:13:46.289: INFO: Created: latency-svc-n74vp
Feb 18 22:13:46.298: INFO: Got endpoints: latency-svc-n74vp [646.150734ms]
Feb 18 22:13:46.402: INFO: Created: latency-svc-zdm9x
Feb 18 22:13:46.455: INFO: Got endpoints: latency-svc-zdm9x [749.505911ms]
Feb 18 22:13:46.465: INFO: Created: latency-svc-pnn6l
Feb 18 22:13:46.465: INFO: Got endpoints: latency-svc-pnn6l [732.920964ms]
Feb 18 22:13:46.544: INFO: Created: latency-svc-28kwt
Feb 18 22:13:46.581: INFO: Created: latency-svc-5qlk5
Feb 18 22:13:46.582: INFO: Got endpoints: latency-svc-28kwt [818.362756ms]
Feb 18 22:13:46.606: INFO: Got endpoints: latency-svc-5qlk5 [766.887538ms]
Feb 18 22:13:46.609: INFO: Created: latency-svc-7ddqj
Feb 18 22:13:46.620: INFO: Got endpoints: latency-svc-7ddqj [745.147446ms]
Feb 18 22:13:46.703: INFO: Created: latency-svc-zwskk
Feb 18 22:13:46.703: INFO: Got endpoints: latency-svc-zwskk [782.86392ms]
Feb 18 22:13:46.735: INFO: Created: latency-svc-zm2b4
Feb 18 22:13:46.740: INFO: Got endpoints: latency-svc-zm2b4 [752.197057ms]
Feb 18 22:13:46.771: INFO: Created: latency-svc-78999
Feb 18 22:13:46.880: INFO: Got endpoints: latency-svc-78999 [869.57339ms]
Feb 18 22:13:46.890: INFO: Created: latency-svc-vwstk
Feb 18 22:13:46.950: INFO: Created: latency-svc-z5chk
Feb 18 22:13:46.953: INFO: Got endpoints: latency-svc-vwstk [858.908593ms]
Feb 18 22:13:47.037: INFO: Got endpoints: latency-svc-z5chk [913.95073ms]
Feb 18 22:13:47.039: INFO: Created: latency-svc-9wmrw
Feb 18 22:13:47.051: INFO: Got endpoints: latency-svc-9wmrw [907.740423ms]
Feb 18 22:13:47.081: INFO: Created: latency-svc-qslpg
Feb 18 22:13:47.082: INFO: Got endpoints: latency-svc-qslpg [904.198393ms]
Feb 18 22:13:47.104: INFO: Created: latency-svc-lt9qv
Feb 18 22:13:47.112: INFO: Got endpoints: latency-svc-lt9qv [874.122016ms]
Feb 18 22:13:47.137: INFO: Created: latency-svc-dnbfl
Feb 18 22:13:47.220: INFO: Got endpoints: latency-svc-dnbfl [941.280433ms]
Feb 18 22:13:47.279: INFO: Created: latency-svc-rr8c5
Feb 18 22:13:47.300: INFO: Got endpoints: latency-svc-rr8c5 [1.002158569s]
Feb 18 22:13:47.432: INFO: Created: latency-svc-vzknj
Feb 18 22:13:47.434: INFO: Created: latency-svc-2wwcb
Feb 18 22:13:47.459: INFO: Created: latency-svc-sgq85
Feb 18 22:13:47.460: INFO: Got endpoints: latency-svc-2wwcb [995.206541ms]
Feb 18 22:13:47.470: INFO: Got endpoints: latency-svc-vzknj [1.015465027s]
Feb 18 22:13:47.471: INFO: Got endpoints: latency-svc-sgq85 [888.651045ms]
Feb 18 22:13:47.573: INFO: Created: latency-svc-cnkxv
Feb 18 22:13:47.590: INFO: Got endpoints: latency-svc-cnkxv [984.345698ms]
Feb 18 22:13:47.596: INFO: Created: latency-svc-jdvt7
Feb 18 22:13:47.627: INFO: Got endpoints: latency-svc-jdvt7 [1.006284814s]
Feb 18 22:13:47.716: INFO: Created: latency-svc-m7d9w
Feb 18 22:13:47.743: INFO: Got endpoints: latency-svc-m7d9w [1.04050437s]
Feb 18 22:13:47.748: INFO: Created: latency-svc-nhvx2
Feb 18 22:13:47.751: INFO: Got endpoints: latency-svc-nhvx2 [1.010890379s]
Feb 18 22:13:47.779: INFO: Created: latency-svc-snj5l
Feb 18 22:13:47.800: INFO: Got endpoints: latency-svc-snj5l [919.593991ms]
Feb 18 22:13:47.867: INFO: Created: latency-svc-7m5vx
Feb 18 22:13:47.941: INFO: Got endpoints: latency-svc-7m5vx [987.32092ms]
Feb 18 22:13:47.950: INFO: Created: latency-svc-clzdq
Feb 18 22:13:47.951: INFO: Got endpoints: latency-svc-clzdq [913.224159ms]
Feb 18 22:13:48.058: INFO: Created: latency-svc-26lkk
Feb 18 22:13:48.073: INFO: Got endpoints: latency-svc-26lkk [1.021233794s]
Feb 18 22:13:48.094: INFO: Created: latency-svc-4bvjf
Feb 18 22:13:48.101: INFO: Got endpoints: latency-svc-4bvjf [1.018851828s]
Feb 18 22:13:48.142: INFO: Created: latency-svc-bbghf
Feb 18 22:13:48.194: INFO: Got endpoints: latency-svc-bbghf [1.082117503s]
Feb 18 22:13:48.203: INFO: Created: latency-svc-prdnt
Feb 18 22:13:48.218: INFO: Got endpoints: latency-svc-prdnt [998.432833ms]
Feb 18 22:13:48.219: INFO: Created: latency-svc-fng48
Feb 18 22:13:48.254: INFO: Got endpoints: latency-svc-fng48 [954.369475ms]
Feb 18 22:13:48.254: INFO: Created: latency-svc-s7lbz
Feb 18 22:13:48.261: INFO: Got endpoints: latency-svc-s7lbz [800.705729ms]
Feb 18 22:13:48.277: INFO: Created: latency-svc-4fxvf
Feb 18 22:13:48.280: INFO: Got endpoints: latency-svc-4fxvf [809.508771ms]
Feb 18 22:13:48.280: INFO: Latencies: [154.424826ms 217.461728ms 334.364365ms 497.894026ms 558.650641ms 564.195309ms 581.11897ms 588.103872ms 606.916276ms 632.335463ms 637.898209ms 644.606977ms 645.520813ms 646.150734ms 646.236374ms 646.697501ms 651.608872ms 652.596456ms 653.403292ms 653.832708ms 655.317152ms 658.936856ms 663.823739ms 664.616635ms 666.22678ms 666.705477ms 667.795756ms 671.642601ms 671.83191ms 677.228207ms 679.871204ms 681.419644ms 681.987078ms 686.556117ms 689.39784ms 693.256525ms 693.760078ms 694.497597ms 694.501675ms 696.186027ms 699.847504ms 700.341562ms 703.107331ms 703.547918ms 704.915911ms 706.855064ms 708.198208ms 711.174958ms 713.394721ms 714.461787ms 715.215548ms 715.223381ms 715.798667ms 716.29532ms 716.48153ms 722.304916ms 724.959764ms 726.791725ms 727.455546ms 728.330911ms 729.370938ms 730.116903ms 731.020579ms 731.466429ms 732.436068ms 732.920964ms 733.521331ms 734.476532ms 734.700557ms 735.025568ms 736.122979ms 739.477711ms 740.776185ms 741.363169ms 745.147446ms 746.813707ms 747.80808ms 749.505911ms 749.968639ms 751.925709ms 752.197057ms 753.47297ms 754.474031ms 755.184567ms 766.887538ms 769.046241ms 769.233895ms 770.957562ms 772.898777ms 775.454317ms 776.119139ms 782.785366ms 782.86392ms 788.904597ms 793.057785ms 793.96102ms 799.513278ms 800.705729ms 803.481024ms 806.353676ms 807.452368ms 809.508771ms 809.98322ms 811.075239ms 814.268862ms 818.362756ms 821.204605ms 843.57443ms 850.787505ms 855.502849ms 856.604661ms 858.908593ms 859.686909ms 868.757974ms 869.57339ms 874.122016ms 888.651045ms 904.198393ms 905.874954ms 907.740423ms 909.838562ms 911.021979ms 911.216875ms 911.761638ms 913.224159ms 913.95073ms 914.994269ms 916.095833ms 918.045288ms 918.693378ms 919.593991ms 921.337784ms 925.554565ms 926.596421ms 934.598671ms 940.949347ms 941.280433ms 942.120233ms 944.229149ms 945.982212ms 947.390881ms 954.369475ms 954.954767ms 957.192135ms 961.934969ms 965.39634ms 966.600725ms 970.917301ms 973.304699ms 976.362978ms 980.79813ms 984.345698ms 
987.32092ms 995.206541ms 998.432833ms 1.002158569s 1.004754793s 1.006284814s 1.006777631s 1.010890379s 1.013963064s 1.015465027s 1.015755898s 1.018851828s 1.021233794s 1.023851792s 1.026553243s 1.02695873s 1.04050437s 1.054197098s 1.065858281s 1.082117503s 1.085133877s 1.094479754s 1.098045333s 1.099599035s 1.102109598s 1.102677685s 1.10482652s 1.111000245s 1.13188772s 1.137409329s 1.138427343s 1.152039654s 1.159644058s 1.160393424s 1.170114492s 1.184289083s 1.199872445s 1.202355475s 1.224061964s 1.242874393s 1.259285182s 1.291767648s 1.300372814s 1.36195886s 1.451088248s 1.497945021s 1.549743611s 1.559094166s]
Feb 18 22:13:48.280: INFO: 50 %ile: 807.452368ms
Feb 18 22:13:48.280: INFO: 90 %ile: 1.13188772s
Feb 18 22:13:48.280: INFO: 99 %ile: 1.549743611s
Feb 18 22:13:48.280: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:13:48.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4613" for this suite.

• [SLOW TEST:31.672 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":157,"skipped":2610,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:13:48.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 22:13:48.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6" in namespace "projected-8021" to be "success or failure"
Feb 18 22:13:48.425: INFO: Pod "downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.040729ms
Feb 18 22:13:50.432: INFO: Pod "downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023938019s
Feb 18 22:13:52.439: INFO: Pod "downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030985951s
Feb 18 22:13:54.519: INFO: Pod "downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111538505s
Feb 18 22:13:56.597: INFO: Pod "downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189329772s
Feb 18 22:13:58.613: INFO: Pod "downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205516668s
Feb 18 22:14:00.640: INFO: Pod "downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.232101685s
Feb 18 22:14:02.650: INFO: Pod "downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.241849206s
STEP: Saw pod success
Feb 18 22:14:02.650: INFO: Pod "downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6" satisfied condition "success or failure"
Feb 18 22:14:02.736: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6 container client-container: 
STEP: delete the pod
Feb 18 22:14:02.835: INFO: Waiting for pod downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6 to disappear
Feb 18 22:14:02.913: INFO: Pod downwardapi-volume-6105d23e-6b57-44d0-aec6-d25ab6c620b6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:14:02.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8021" for this suite.

• [SLOW TEST:14.730 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2614,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
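The pod this test creates mounts a projected downwardAPI volume exposing the container's cpu limit; since no cpu limit is set on the container, the kubelet substitutes the node's allocatable cpu. A minimal manifest of that shape might look like the following (names and image are illustrative, not the test's exact spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the test uses a generated UID-based name
spec:
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    # Note: no resources.limits.cpu here, so limits.cpu below
    # resolves to the node's allocatable cpu.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
  restartPolicy: Never
```

The container reads `/etc/podinfo/cpu_limit` and exits, which is why the pod transitions Pending → Succeeded in the log above.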
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:14:03.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 18 22:14:03.428: INFO: Waiting up to 5m0s for pod "pod-6f578335-671a-453a-8ede-3294a79aae36" in namespace "emptydir-2550" to be "success or failure"
Feb 18 22:14:03.562: INFO: Pod "pod-6f578335-671a-453a-8ede-3294a79aae36": Phase="Pending", Reason="", readiness=false. Elapsed: 134.276741ms
Feb 18 22:14:05.568: INFO: Pod "pod-6f578335-671a-453a-8ede-3294a79aae36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139860795s
Feb 18 22:14:07.707: INFO: Pod "pod-6f578335-671a-453a-8ede-3294a79aae36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.279338671s
Feb 18 22:14:09.772: INFO: Pod "pod-6f578335-671a-453a-8ede-3294a79aae36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.344049198s
Feb 18 22:14:11.784: INFO: Pod "pod-6f578335-671a-453a-8ede-3294a79aae36": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35557032s
Feb 18 22:14:13.938: INFO: Pod "pod-6f578335-671a-453a-8ede-3294a79aae36": Phase="Pending", Reason="", readiness=false. Elapsed: 10.510402117s
Feb 18 22:14:16.017: INFO: Pod "pod-6f578335-671a-453a-8ede-3294a79aae36": Phase="Pending", Reason="", readiness=false. Elapsed: 12.588567682s
Feb 18 22:14:18.082: INFO: Pod "pod-6f578335-671a-453a-8ede-3294a79aae36": Phase="Pending", Reason="", readiness=false. Elapsed: 14.654386279s
Feb 18 22:14:20.136: INFO: Pod "pod-6f578335-671a-453a-8ede-3294a79aae36": Phase="Pending", Reason="", readiness=false. Elapsed: 16.707805572s
Feb 18 22:14:22.145: INFO: Pod "pod-6f578335-671a-453a-8ede-3294a79aae36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.717180952s
STEP: Saw pod success
Feb 18 22:14:22.145: INFO: Pod "pod-6f578335-671a-453a-8ede-3294a79aae36" satisfied condition "success or failure"
Feb 18 22:14:22.303: INFO: Trying to get logs from node jerma-node pod pod-6f578335-671a-453a-8ede-3294a79aae36 container test-container: 
STEP: delete the pod
Feb 18 22:14:22.567: INFO: Waiting for pod pod-6f578335-671a-453a-8ede-3294a79aae36 to disappear
Feb 18 22:14:22.603: INFO: Pod pod-6f578335-671a-453a-8ede-3294a79aae36 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:14:22.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2550" for this suite.

• [SLOW TEST:19.720 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2621,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:14:22.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-3f662ee3-bb9e-4f0a-a6d1-179e5a6126f2
STEP: Creating a pod to test consume secrets
Feb 18 22:14:23.178: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14" in namespace "projected-9991" to be "success or failure"
Feb 18 22:14:23.196: INFO: Pod "pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14": Phase="Pending", Reason="", readiness=false. Elapsed: 17.970431ms
Feb 18 22:14:25.357: INFO: Pod "pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179115298s
Feb 18 22:14:27.375: INFO: Pod "pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196997806s
Feb 18 22:14:29.386: INFO: Pod "pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207985629s
Feb 18 22:14:32.477: INFO: Pod "pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14": Phase="Pending", Reason="", readiness=false. Elapsed: 9.299155112s
Feb 18 22:14:34.487: INFO: Pod "pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14": Phase="Pending", Reason="", readiness=false. Elapsed: 11.3090197s
Feb 18 22:14:36.496: INFO: Pod "pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.318302605s
STEP: Saw pod success
Feb 18 22:14:36.496: INFO: Pod "pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14" satisfied condition "success or failure"
Feb 18 22:14:36.504: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14 container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 22:14:36.591: INFO: Waiting for pod pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14 to disappear
Feb 18 22:14:36.596: INFO: Pod pod-projected-secrets-0711a809-48ef-4bb5-8668-4619cfad9f14 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:14:36.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9991" for this suite.

• [SLOW TEST:13.866 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2624,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:14:36.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 18 22:14:57.003: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3145 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:14:57.003: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:14:57.052198       8 log.go:172] (0xc0061e0370) (0xc000516f00) Create stream
I0218 22:14:57.052462       8 log.go:172] (0xc0061e0370) (0xc000516f00) Stream added, broadcasting: 1
I0218 22:14:57.056391       8 log.go:172] (0xc0061e0370) Reply frame received for 1
I0218 22:14:57.056424       8 log.go:172] (0xc0061e0370) (0xc0014503c0) Create stream
I0218 22:14:57.056438       8 log.go:172] (0xc0061e0370) (0xc0014503c0) Stream added, broadcasting: 3
I0218 22:14:57.057683       8 log.go:172] (0xc0061e0370) Reply frame received for 3
I0218 22:14:57.057704       8 log.go:172] (0xc0061e0370) (0xc000517040) Create stream
I0218 22:14:57.057712       8 log.go:172] (0xc0061e0370) (0xc000517040) Stream added, broadcasting: 5
I0218 22:14:57.059297       8 log.go:172] (0xc0061e0370) Reply frame received for 5
I0218 22:14:57.122334       8 log.go:172] (0xc0061e0370) Data frame received for 3
I0218 22:14:57.122426       8 log.go:172] (0xc0014503c0) (3) Data frame handling
I0218 22:14:57.122457       8 log.go:172] (0xc0014503c0) (3) Data frame sent
I0218 22:14:57.190420       8 log.go:172] (0xc0061e0370) (0xc0014503c0) Stream removed, broadcasting: 3
I0218 22:14:57.190532       8 log.go:172] (0xc0061e0370) Data frame received for 1
I0218 22:14:57.190572       8 log.go:172] (0xc000516f00) (1) Data frame handling
I0218 22:14:57.190586       8 log.go:172] (0xc0061e0370) (0xc000517040) Stream removed, broadcasting: 5
I0218 22:14:57.190602       8 log.go:172] (0xc000516f00) (1) Data frame sent
I0218 22:14:57.190616       8 log.go:172] (0xc0061e0370) (0xc000516f00) Stream removed, broadcasting: 1
I0218 22:14:57.190701       8 log.go:172] (0xc0061e0370) Go away received
I0218 22:14:57.191004       8 log.go:172] (0xc0061e0370) (0xc000516f00) Stream removed, broadcasting: 1
I0218 22:14:57.191027       8 log.go:172] (0xc0061e0370) (0xc0014503c0) Stream removed, broadcasting: 3
I0218 22:14:57.191057       8 log.go:172] (0xc0061e0370) (0xc000517040) Stream removed, broadcasting: 5
Feb 18 22:14:57.191: INFO: Exec stderr: ""
Feb 18 22:14:57.191: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3145 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:14:57.191: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:14:57.239844       8 log.go:172] (0xc0011ae370) (0xc001389220) Create stream
I0218 22:14:57.239929       8 log.go:172] (0xc0011ae370) (0xc001389220) Stream added, broadcasting: 1
I0218 22:14:57.243436       8 log.go:172] (0xc0011ae370) Reply frame received for 1
I0218 22:14:57.243480       8 log.go:172] (0xc0011ae370) (0xc000517e00) Create stream
I0218 22:14:57.243488       8 log.go:172] (0xc0011ae370) (0xc000517e00) Stream added, broadcasting: 3
I0218 22:14:57.244380       8 log.go:172] (0xc0011ae370) Reply frame received for 3
I0218 22:14:57.244405       8 log.go:172] (0xc0011ae370) (0xc0013892c0) Create stream
I0218 22:14:57.244415       8 log.go:172] (0xc0011ae370) (0xc0013892c0) Stream added, broadcasting: 5
I0218 22:14:57.245404       8 log.go:172] (0xc0011ae370) Reply frame received for 5
I0218 22:14:57.304645       8 log.go:172] (0xc0011ae370) Data frame received for 3
I0218 22:14:57.304865       8 log.go:172] (0xc000517e00) (3) Data frame handling
I0218 22:14:57.304913       8 log.go:172] (0xc000517e00) (3) Data frame sent
I0218 22:14:57.379674       8 log.go:172] (0xc0011ae370) Data frame received for 1
I0218 22:14:57.379716       8 log.go:172] (0xc001389220) (1) Data frame handling
I0218 22:14:57.379737       8 log.go:172] (0xc001389220) (1) Data frame sent
I0218 22:14:57.379754       8 log.go:172] (0xc0011ae370) (0xc001389220) Stream removed, broadcasting: 1
I0218 22:14:57.379975       8 log.go:172] (0xc0011ae370) (0xc000517e00) Stream removed, broadcasting: 3
I0218 22:14:57.380010       8 log.go:172] (0xc0011ae370) (0xc0013892c0) Stream removed, broadcasting: 5
I0218 22:14:57.380063       8 log.go:172] (0xc0011ae370) (0xc001389220) Stream removed, broadcasting: 1
I0218 22:14:57.380075       8 log.go:172] (0xc0011ae370) (0xc000517e00) Stream removed, broadcasting: 3
I0218 22:14:57.380086       8 log.go:172] (0xc0011ae370) (0xc0013892c0) Stream removed, broadcasting: 5
Feb 18 22:14:57.380: INFO: Exec stderr: ""
I0218 22:14:57.380196       8 log.go:172] (0xc0011ae370) Go away received
Feb 18 22:14:57.380: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3145 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:14:57.380: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:14:57.423946       8 log.go:172] (0xc00279b970) (0xc001e84460) Create stream
I0218 22:14:57.424031       8 log.go:172] (0xc00279b970) (0xc001e84460) Stream added, broadcasting: 1
I0218 22:14:57.429869       8 log.go:172] (0xc00279b970) Reply frame received for 1
I0218 22:14:57.429933       8 log.go:172] (0xc00279b970) (0xc0013895e0) Create stream
I0218 22:14:57.429947       8 log.go:172] (0xc00279b970) (0xc0013895e0) Stream added, broadcasting: 3
I0218 22:14:57.432360       8 log.go:172] (0xc00279b970) Reply frame received for 3
I0218 22:14:57.432407       8 log.go:172] (0xc00279b970) (0xc00112f900) Create stream
I0218 22:14:57.432422       8 log.go:172] (0xc00279b970) (0xc00112f900) Stream added, broadcasting: 5
I0218 22:14:57.435319       8 log.go:172] (0xc00279b970) Reply frame received for 5
I0218 22:14:57.505596       8 log.go:172] (0xc00279b970) Data frame received for 3
I0218 22:14:57.505996       8 log.go:172] (0xc0013895e0) (3) Data frame handling
I0218 22:14:57.506082       8 log.go:172] (0xc0013895e0) (3) Data frame sent
I0218 22:14:57.573799       8 log.go:172] (0xc00279b970) (0xc00112f900) Stream removed, broadcasting: 5
I0218 22:14:57.573892       8 log.go:172] (0xc00279b970) Data frame received for 1
I0218 22:14:57.573906       8 log.go:172] (0xc001e84460) (1) Data frame handling
I0218 22:14:57.573916       8 log.go:172] (0xc001e84460) (1) Data frame sent
I0218 22:14:57.573931       8 log.go:172] (0xc00279b970) (0xc0013895e0) Stream removed, broadcasting: 3
I0218 22:14:57.573949       8 log.go:172] (0xc00279b970) (0xc001e84460) Stream removed, broadcasting: 1
I0218 22:14:57.573971       8 log.go:172] (0xc00279b970) Go away received
I0218 22:14:57.574252       8 log.go:172] (0xc00279b970) (0xc001e84460) Stream removed, broadcasting: 1
I0218 22:14:57.574307       8 log.go:172] (0xc00279b970) (0xc0013895e0) Stream removed, broadcasting: 3
I0218 22:14:57.574333       8 log.go:172] (0xc00279b970) (0xc00112f900) Stream removed, broadcasting: 5
Feb 18 22:14:57.574: INFO: Exec stderr: ""
Feb 18 22:14:57.574: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3145 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:14:57.574: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:14:57.621719       8 log.go:172] (0xc0026be210) (0xc00110af00) Create stream
I0218 22:14:57.621768       8 log.go:172] (0xc0026be210) (0xc00110af00) Stream added, broadcasting: 1
I0218 22:14:57.624397       8 log.go:172] (0xc0026be210) Reply frame received for 1
I0218 22:14:57.624438       8 log.go:172] (0xc0026be210) (0xc001e84500) Create stream
I0218 22:14:57.624447       8 log.go:172] (0xc0026be210) (0xc001e84500) Stream added, broadcasting: 3
I0218 22:14:57.625698       8 log.go:172] (0xc0026be210) Reply frame received for 3
I0218 22:14:57.625843       8 log.go:172] (0xc0026be210) (0xc001389680) Create stream
I0218 22:14:57.625868       8 log.go:172] (0xc0026be210) (0xc001389680) Stream added, broadcasting: 5
I0218 22:14:57.626896       8 log.go:172] (0xc0026be210) Reply frame received for 5
I0218 22:14:57.695883       8 log.go:172] (0xc0026be210) Data frame received for 3
I0218 22:14:57.696064       8 log.go:172] (0xc001e84500) (3) Data frame handling
I0218 22:14:57.696201       8 log.go:172] (0xc001e84500) (3) Data frame sent
I0218 22:14:57.809922       8 log.go:172] (0xc0026be210) (0xc001e84500) Stream removed, broadcasting: 3
I0218 22:14:57.810081       8 log.go:172] (0xc0026be210) Data frame received for 1
I0218 22:14:57.810091       8 log.go:172] (0xc00110af00) (1) Data frame handling
I0218 22:14:57.810106       8 log.go:172] (0xc00110af00) (1) Data frame sent
I0218 22:14:57.810117       8 log.go:172] (0xc0026be210) (0xc00110af00) Stream removed, broadcasting: 1
I0218 22:14:57.810239       8 log.go:172] (0xc0026be210) (0xc001389680) Stream removed, broadcasting: 5
I0218 22:14:57.810273       8 log.go:172] (0xc0026be210) (0xc00110af00) Stream removed, broadcasting: 1
I0218 22:14:57.810283       8 log.go:172] (0xc0026be210) (0xc001e84500) Stream removed, broadcasting: 3
I0218 22:14:57.810293       8 log.go:172] (0xc0026be210) (0xc001389680) Stream removed, broadcasting: 5
Feb 18 22:14:57.810: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 18 22:14:57.810: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3145 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:14:57.810: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:14:57.813304       8 log.go:172] (0xc0026be210) Go away received
I0218 22:14:57.851999       8 log.go:172] (0xc0026d2160) (0xc001e84780) Create stream
I0218 22:14:57.852117       8 log.go:172] (0xc0026d2160) (0xc001e84780) Stream added, broadcasting: 1
I0218 22:14:57.860928       8 log.go:172] (0xc0026d2160) Reply frame received for 1
I0218 22:14:57.861094       8 log.go:172] (0xc0026d2160) (0xc0015cc000) Create stream
I0218 22:14:57.861132       8 log.go:172] (0xc0026d2160) (0xc0015cc000) Stream added, broadcasting: 3
I0218 22:14:57.862958       8 log.go:172] (0xc0026d2160) Reply frame received for 3
I0218 22:14:57.863053       8 log.go:172] (0xc0026d2160) (0xc0015cc0a0) Create stream
I0218 22:14:57.863094       8 log.go:172] (0xc0026d2160) (0xc0015cc0a0) Stream added, broadcasting: 5
I0218 22:14:57.865460       8 log.go:172] (0xc0026d2160) Reply frame received for 5
I0218 22:14:57.933398       8 log.go:172] (0xc0026d2160) Data frame received for 3
I0218 22:14:57.933491       8 log.go:172] (0xc0015cc000) (3) Data frame handling
I0218 22:14:57.933528       8 log.go:172] (0xc0015cc000) (3) Data frame sent
I0218 22:14:58.012606       8 log.go:172] (0xc0026d2160) (0xc0015cc0a0) Stream removed, broadcasting: 5
I0218 22:14:58.012722       8 log.go:172] (0xc0026d2160) Data frame received for 1
I0218 22:14:58.012769       8 log.go:172] (0xc0026d2160) (0xc0015cc000) Stream removed, broadcasting: 3
I0218 22:14:58.012834       8 log.go:172] (0xc001e84780) (1) Data frame handling
I0218 22:14:58.012900       8 log.go:172] (0xc001e84780) (1) Data frame sent
I0218 22:14:58.012917       8 log.go:172] (0xc0026d2160) (0xc001e84780) Stream removed, broadcasting: 1
I0218 22:14:58.012935       8 log.go:172] (0xc0026d2160) Go away received
I0218 22:14:58.013464       8 log.go:172] (0xc0026d2160) (0xc001e84780) Stream removed, broadcasting: 1
I0218 22:14:58.013479       8 log.go:172] (0xc0026d2160) (0xc0015cc000) Stream removed, broadcasting: 3
I0218 22:14:58.013487       8 log.go:172] (0xc0026d2160) (0xc0015cc0a0) Stream removed, broadcasting: 5
Feb 18 22:14:58.013: INFO: Exec stderr: ""
Feb 18 22:14:58.013: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3145 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:14:58.013: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:14:58.048044       8 log.go:172] (0xc0011aea50) (0xc000b14460) Create stream
I0218 22:14:58.048123       8 log.go:172] (0xc0011aea50) (0xc000b14460) Stream added, broadcasting: 1
I0218 22:14:58.050628       8 log.go:172] (0xc0011aea50) Reply frame received for 1
I0218 22:14:58.050721       8 log.go:172] (0xc0011aea50) (0xc001e84820) Create stream
I0218 22:14:58.050729       8 log.go:172] (0xc0011aea50) (0xc001e84820) Stream added, broadcasting: 3
I0218 22:14:58.051743       8 log.go:172] (0xc0011aea50) Reply frame received for 3
I0218 22:14:58.051765       8 log.go:172] (0xc0011aea50) (0xc001e848c0) Create stream
I0218 22:14:58.051773       8 log.go:172] (0xc0011aea50) (0xc001e848c0) Stream added, broadcasting: 5
I0218 22:14:58.053032       8 log.go:172] (0xc0011aea50) Reply frame received for 5
I0218 22:14:58.134812       8 log.go:172] (0xc0011aea50) Data frame received for 3
I0218 22:14:58.135009       8 log.go:172] (0xc001e84820) (3) Data frame handling
I0218 22:14:58.135089       8 log.go:172] (0xc001e84820) (3) Data frame sent
I0218 22:14:58.203316       8 log.go:172] (0xc0011aea50) Data frame received for 1
I0218 22:14:58.203593       8 log.go:172] (0xc0011aea50) (0xc001e84820) Stream removed, broadcasting: 3
I0218 22:14:58.203690       8 log.go:172] (0xc000b14460) (1) Data frame handling
I0218 22:14:58.203737       8 log.go:172] (0xc0011aea50) (0xc001e848c0) Stream removed, broadcasting: 5
I0218 22:14:58.203798       8 log.go:172] (0xc000b14460) (1) Data frame sent
I0218 22:14:58.203821       8 log.go:172] (0xc0011aea50) (0xc000b14460) Stream removed, broadcasting: 1
I0218 22:14:58.203856       8 log.go:172] (0xc0011aea50) Go away received
I0218 22:14:58.204312       8 log.go:172] (0xc0011aea50) (0xc000b14460) Stream removed, broadcasting: 1
I0218 22:14:58.204351       8 log.go:172] (0xc0011aea50) (0xc001e84820) Stream removed, broadcasting: 3
I0218 22:14:58.204364       8 log.go:172] (0xc0011aea50) (0xc001e848c0) Stream removed, broadcasting: 5
Feb 18 22:14:58.204: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 18 22:14:58.204: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3145 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:14:58.204: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:14:58.259447       8 log.go:172] (0xc002b06420) (0xc001450e60) Create stream
I0218 22:14:58.259518       8 log.go:172] (0xc002b06420) (0xc001450e60) Stream added, broadcasting: 1
I0218 22:14:58.263757       8 log.go:172] (0xc002b06420) Reply frame received for 1
I0218 22:14:58.263789       8 log.go:172] (0xc002b06420) (0xc0015cc1e0) Create stream
I0218 22:14:58.263800       8 log.go:172] (0xc002b06420) (0xc0015cc1e0) Stream added, broadcasting: 3
I0218 22:14:58.265771       8 log.go:172] (0xc002b06420) Reply frame received for 3
I0218 22:14:58.265889       8 log.go:172] (0xc002b06420) (0xc0015cca00) Create stream
I0218 22:14:58.265904       8 log.go:172] (0xc002b06420) (0xc0015cca00) Stream added, broadcasting: 5
I0218 22:14:58.267777       8 log.go:172] (0xc002b06420) Reply frame received for 5
I0218 22:14:58.363225       8 log.go:172] (0xc002b06420) Data frame received for 3
I0218 22:14:58.363442       8 log.go:172] (0xc0015cc1e0) (3) Data frame handling
I0218 22:14:58.363520       8 log.go:172] (0xc0015cc1e0) (3) Data frame sent
I0218 22:14:58.430028       8 log.go:172] (0xc002b06420) Data frame received for 1
I0218 22:14:58.430151       8 log.go:172] (0xc002b06420) (0xc0015cc1e0) Stream removed, broadcasting: 3
I0218 22:14:58.430257       8 log.go:172] (0xc001450e60) (1) Data frame handling
I0218 22:14:58.430314       8 log.go:172] (0xc001450e60) (1) Data frame sent
I0218 22:14:58.430464       8 log.go:172] (0xc002b06420) (0xc0015cca00) Stream removed, broadcasting: 5
I0218 22:14:58.430728       8 log.go:172] (0xc002b06420) (0xc001450e60) Stream removed, broadcasting: 1
I0218 22:14:58.430753       8 log.go:172] (0xc002b06420) Go away received
I0218 22:14:58.431403       8 log.go:172] (0xc002b06420) (0xc001450e60) Stream removed, broadcasting: 1
I0218 22:14:58.431444       8 log.go:172] (0xc002b06420) (0xc0015cc1e0) Stream removed, broadcasting: 3
I0218 22:14:58.431457       8 log.go:172] (0xc002b06420) (0xc0015cca00) Stream removed, broadcasting: 5
Feb 18 22:14:58.431: INFO: Exec stderr: ""
Feb 18 22:14:58.431: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3145 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:14:58.431: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:14:58.479527       8 log.go:172] (0xc001fb82c0) (0xc0015cd2c0) Create stream
I0218 22:14:58.479796       8 log.go:172] (0xc001fb82c0) (0xc0015cd2c0) Stream added, broadcasting: 1
I0218 22:14:58.486964       8 log.go:172] (0xc001fb82c0) Reply frame received for 1
I0218 22:14:58.487026       8 log.go:172] (0xc001fb82c0) (0xc001e84960) Create stream
I0218 22:14:58.487037       8 log.go:172] (0xc001fb82c0) (0xc001e84960) Stream added, broadcasting: 3
I0218 22:14:58.488645       8 log.go:172] (0xc001fb82c0) Reply frame received for 3
I0218 22:14:58.488688       8 log.go:172] (0xc001fb82c0) (0xc000b14960) Create stream
I0218 22:14:58.488705       8 log.go:172] (0xc001fb82c0) (0xc000b14960) Stream added, broadcasting: 5
I0218 22:14:58.490617       8 log.go:172] (0xc001fb82c0) Reply frame received for 5
I0218 22:14:58.574647       8 log.go:172] (0xc001fb82c0) Data frame received for 3
I0218 22:14:58.574840       8 log.go:172] (0xc001e84960) (3) Data frame handling
I0218 22:14:58.574916       8 log.go:172] (0xc001e84960) (3) Data frame sent
I0218 22:14:58.689130       8 log.go:172] (0xc001fb82c0) Data frame received for 1
I0218 22:14:58.689317       8 log.go:172] (0xc001fb82c0) (0xc001e84960) Stream removed, broadcasting: 3
I0218 22:14:58.689407       8 log.go:172] (0xc0015cd2c0) (1) Data frame handling
I0218 22:14:58.689439       8 log.go:172] (0xc001fb82c0) (0xc000b14960) Stream removed, broadcasting: 5
I0218 22:14:58.689503       8 log.go:172] (0xc0015cd2c0) (1) Data frame sent
I0218 22:14:58.689531       8 log.go:172] (0xc001fb82c0) (0xc0015cd2c0) Stream removed, broadcasting: 1
I0218 22:14:58.689562       8 log.go:172] (0xc001fb82c0) Go away received
I0218 22:14:58.689919       8 log.go:172] (0xc001fb82c0) (0xc0015cd2c0) Stream removed, broadcasting: 1
I0218 22:14:58.689954       8 log.go:172] (0xc001fb82c0) (0xc001e84960) Stream removed, broadcasting: 3
I0218 22:14:58.689976       8 log.go:172] (0xc001fb82c0) (0xc000b14960) Stream removed, broadcasting: 5
Feb 18 22:14:58.690: INFO: Exec stderr: ""
Feb 18 22:14:58.690: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3145 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:14:58.690: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:14:58.728060       8 log.go:172] (0xc0011af290) (0xc000b14e60) Create stream
I0218 22:14:58.728297       8 log.go:172] (0xc0011af290) (0xc000b14e60) Stream added, broadcasting: 1
I0218 22:14:58.732457       8 log.go:172] (0xc0011af290) Reply frame received for 1
I0218 22:14:58.732508       8 log.go:172] (0xc0011af290) (0xc001e84aa0) Create stream
I0218 22:14:58.732518       8 log.go:172] (0xc0011af290) (0xc001e84aa0) Stream added, broadcasting: 3
I0218 22:14:58.733524       8 log.go:172] (0xc0011af290) Reply frame received for 3
I0218 22:14:58.733554       8 log.go:172] (0xc0011af290) (0xc001e84c80) Create stream
I0218 22:14:58.733562       8 log.go:172] (0xc0011af290) (0xc001e84c80) Stream added, broadcasting: 5
I0218 22:14:58.734853       8 log.go:172] (0xc0011af290) Reply frame received for 5
I0218 22:14:58.808773       8 log.go:172] (0xc0011af290) Data frame received for 3
I0218 22:14:58.808821       8 log.go:172] (0xc001e84aa0) (3) Data frame handling
I0218 22:14:58.808839       8 log.go:172] (0xc001e84aa0) (3) Data frame sent
I0218 22:14:58.884276       8 log.go:172] (0xc0011af290) Data frame received for 1
I0218 22:14:58.884373       8 log.go:172] (0xc0011af290) (0xc001e84aa0) Stream removed, broadcasting: 3
I0218 22:14:58.884421       8 log.go:172] (0xc000b14e60) (1) Data frame handling
I0218 22:14:58.884464       8 log.go:172] (0xc0011af290) (0xc001e84c80) Stream removed, broadcasting: 5
I0218 22:14:58.884533       8 log.go:172] (0xc000b14e60) (1) Data frame sent
I0218 22:14:58.884553       8 log.go:172] (0xc0011af290) (0xc000b14e60) Stream removed, broadcasting: 1
I0218 22:14:58.884573       8 log.go:172] (0xc0011af290) Go away received
I0218 22:14:58.884838       8 log.go:172] (0xc0011af290) (0xc000b14e60) Stream removed, broadcasting: 1
I0218 22:14:58.884854       8 log.go:172] (0xc0011af290) (0xc001e84aa0) Stream removed, broadcasting: 3
I0218 22:14:58.884860       8 log.go:172] (0xc0011af290) (0xc001e84c80) Stream removed, broadcasting: 5
Feb 18 22:14:58.884: INFO: Exec stderr: ""
Feb 18 22:14:58.885: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3145 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:14:58.885: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:14:58.929971       8 log.go:172] (0xc001fb89a0) (0xc0015cda40) Create stream
I0218 22:14:58.930100       8 log.go:172] (0xc001fb89a0) (0xc0015cda40) Stream added, broadcasting: 1
I0218 22:14:58.936431       8 log.go:172] (0xc001fb89a0) Reply frame received for 1
I0218 22:14:58.936537       8 log.go:172] (0xc001fb89a0) (0xc001e84d20) Create stream
I0218 22:14:58.936549       8 log.go:172] (0xc001fb89a0) (0xc001e84d20) Stream added, broadcasting: 3
I0218 22:14:58.938400       8 log.go:172] (0xc001fb89a0) Reply frame received for 3
I0218 22:14:58.938429       8 log.go:172] (0xc001fb89a0) (0xc001450fa0) Create stream
I0218 22:14:58.938465       8 log.go:172] (0xc001fb89a0) (0xc001450fa0) Stream added, broadcasting: 5
I0218 22:14:58.940428       8 log.go:172] (0xc001fb89a0) Reply frame received for 5
I0218 22:14:59.031764       8 log.go:172] (0xc001fb89a0) Data frame received for 3
I0218 22:14:59.031863       8 log.go:172] (0xc001e84d20) (3) Data frame handling
I0218 22:14:59.031909       8 log.go:172] (0xc001e84d20) (3) Data frame sent
I0218 22:14:59.114187       8 log.go:172] (0xc001fb89a0) (0xc001450fa0) Stream removed, broadcasting: 5
I0218 22:14:59.114400       8 log.go:172] (0xc001fb89a0) Data frame received for 1
I0218 22:14:59.114455       8 log.go:172] (0xc001fb89a0) (0xc001e84d20) Stream removed, broadcasting: 3
I0218 22:14:59.114511       8 log.go:172] (0xc0015cda40) (1) Data frame handling
I0218 22:14:59.114599       8 log.go:172] (0xc0015cda40) (1) Data frame sent
I0218 22:14:59.114623       8 log.go:172] (0xc001fb89a0) (0xc0015cda40) Stream removed, broadcasting: 1
I0218 22:14:59.114661       8 log.go:172] (0xc001fb89a0) Go away received
I0218 22:14:59.114972       8 log.go:172] (0xc001fb89a0) (0xc0015cda40) Stream removed, broadcasting: 1
I0218 22:14:59.114992       8 log.go:172] (0xc001fb89a0) (0xc001e84d20) Stream removed, broadcasting: 3
I0218 22:14:59.115059       8 log.go:172] (0xc001fb89a0) (0xc001450fa0) Stream removed, broadcasting: 5
Feb 18 22:14:59.115: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:14:59.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3145" for this suite.

• [SLOW TEST:22.516 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2638,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:14:59.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 22:14:59.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52ee98cf-4ca5-47e2-a4ef-fc2bb6018246" in namespace "downward-api-5920" to be "success or failure"
Feb 18 22:14:59.200: INFO: Pod "downwardapi-volume-52ee98cf-4ca5-47e2-a4ef-fc2bb6018246": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148485ms
Feb 18 22:15:01.206: INFO: Pod "downwardapi-volume-52ee98cf-4ca5-47e2-a4ef-fc2bb6018246": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010277422s
Feb 18 22:15:03.217: INFO: Pod "downwardapi-volume-52ee98cf-4ca5-47e2-a4ef-fc2bb6018246": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021635856s
Feb 18 22:15:06.223: INFO: Pod "downwardapi-volume-52ee98cf-4ca5-47e2-a4ef-fc2bb6018246": Phase="Pending", Reason="", readiness=false. Elapsed: 7.027292303s
Feb 18 22:15:08.228: INFO: Pod "downwardapi-volume-52ee98cf-4ca5-47e2-a4ef-fc2bb6018246": Phase="Pending", Reason="", readiness=false. Elapsed: 9.03265422s
Feb 18 22:15:10.236: INFO: Pod "downwardapi-volume-52ee98cf-4ca5-47e2-a4ef-fc2bb6018246": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.040142879s
STEP: Saw pod success
Feb 18 22:15:10.236: INFO: Pod "downwardapi-volume-52ee98cf-4ca5-47e2-a4ef-fc2bb6018246" satisfied condition "success or failure"
Feb 18 22:15:10.240: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod downwardapi-volume-52ee98cf-4ca5-47e2-a4ef-fc2bb6018246 container client-container: 
STEP: delete the pod
Feb 18 22:15:10.490: INFO: Waiting for pod downwardapi-volume-52ee98cf-4ca5-47e2-a4ef-fc2bb6018246 to disappear
Feb 18 22:15:10.648: INFO: Pod downwardapi-volume-52ee98cf-4ca5-47e2-a4ef-fc2bb6018246 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:15:10.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5920" for this suite.

• [SLOW TEST:11.540 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2647,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
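The test above verifies that when a container sets no memory limit, a downward API volume exposing `limits.memory` falls back to the node's allocatable memory. A minimal sketch of such a pod (hypothetical names, not the exact manifest the suite generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory is set, so the downward API resolves
    # limits.memory to the node's allocatable memory.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "memory_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```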
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:15:10.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-4443365d-3401-4b14-8ee8-1b93f6344a29
Feb 18 22:15:10.883: INFO: Pod name my-hostname-basic-4443365d-3401-4b14-8ee8-1b93f6344a29: Found 0 pods out of 1
Feb 18 22:15:16.243: INFO: Pod name my-hostname-basic-4443365d-3401-4b14-8ee8-1b93f6344a29: Found 1 pods out of 1
Feb 18 22:15:16.243: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4443365d-3401-4b14-8ee8-1b93f6344a29" are running
Feb 18 22:15:20.829: INFO: Pod "my-hostname-basic-4443365d-3401-4b14-8ee8-1b93f6344a29-2hc9m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 22:15:10 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 22:15:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4443365d-3401-4b14-8ee8-1b93f6344a29]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 22:15:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4443365d-3401-4b14-8ee8-1b93f6344a29]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 22:15:10 +0000 UTC Reason: Message:}])
Feb 18 22:15:20.829: INFO: Trying to dial the pod
Feb 18 22:15:25.858: INFO: Controller my-hostname-basic-4443365d-3401-4b14-8ee8-1b93f6344a29: Got expected result from replica 1 [my-hostname-basic-4443365d-3401-4b14-8ee8-1b93f6344a29-2hc9m]: "my-hostname-basic-4443365d-3401-4b14-8ee8-1b93f6344a29-2hc9m", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:15:25.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5989" for this suite.

• [SLOW TEST:15.221 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":163,"skipped":2667,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
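The ReplicationController test above creates one replica of a pod that serves its own hostname and dials each replica until it answers with its pod name. A manifest in the same spirit (image and port are assumptions, not taken from this run):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # agnhost's serve-hostname mode replies with the pod name,
        # which is what the test matches against the replica name.
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
```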
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:15:25.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 18 22:15:26.047: INFO: Waiting up to 5m0s for pod "pod-2e9866c4-11a7-428d-892f-2ced12aaf3b6" in namespace "emptydir-2092" to be "success or failure"
Feb 18 22:15:26.071: INFO: Pod "pod-2e9866c4-11a7-428d-892f-2ced12aaf3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.237324ms
Feb 18 22:15:28.075: INFO: Pod "pod-2e9866c4-11a7-428d-892f-2ced12aaf3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028313783s
Feb 18 22:15:30.080: INFO: Pod "pod-2e9866c4-11a7-428d-892f-2ced12aaf3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033498699s
Feb 18 22:15:32.087: INFO: Pod "pod-2e9866c4-11a7-428d-892f-2ced12aaf3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039691096s
Feb 18 22:15:34.754: INFO: Pod "pod-2e9866c4-11a7-428d-892f-2ced12aaf3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.706795125s
Feb 18 22:15:36.995: INFO: Pod "pod-2e9866c4-11a7-428d-892f-2ced12aaf3b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.948072817s
STEP: Saw pod success
Feb 18 22:15:36.995: INFO: Pod "pod-2e9866c4-11a7-428d-892f-2ced12aaf3b6" satisfied condition "success or failure"
Feb 18 22:15:37.008: INFO: Trying to get logs from node jerma-node pod pod-2e9866c4-11a7-428d-892f-2ced12aaf3b6 container test-container: 
STEP: delete the pod
Feb 18 22:15:37.518: INFO: Waiting for pod pod-2e9866c4-11a7-428d-892f-2ced12aaf3b6 to disappear
Feb 18 22:15:37.528: INFO: Pod pod-2e9866c4-11a7-428d-892f-2ced12aaf3b6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:15:37.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2092" for this suite.

• [SLOW TEST:11.679 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2687,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
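The emptyDir test above runs as a non-root user on the default medium and checks the permission bits on the mounted volume. A rough sketch of the shape of that pod (the actual test image sets the 0777 mode itself; this is only illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  securityContext:
    runAsUser: 1001          # non-root, as in the test name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # default medium (node-local storage)
```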
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:15:37.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:15:37.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:15:47.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6844" for this suite.

• [SLOW TEST:10.253 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2700,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:15:47.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:15:47.933: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:15:49.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5459" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":166,"skipped":2700,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
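The CRD test above simply creates and deletes a CustomResourceDefinition through the API. A minimal `apiextensions.k8s.io/v1` definition of the kind it exercises (group and names here are hypothetical):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Must be <plural>.<group>
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # Accept arbitrary fields; a real CRD would declare a schema.
        x-kubernetes-preserve-unknown-fields: true
```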

------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:15:49.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Feb 18 22:15:49.256: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1219" to be "success or failure"
Feb 18 22:15:49.273: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.383674ms
Feb 18 22:15:51.285: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029373009s
Feb 18 22:15:53.301: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044970209s
Feb 18 22:15:55.308: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052730778s
Feb 18 22:15:57.322: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066777932s
Feb 18 22:15:59.330: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074351301s
STEP: Saw pod success
Feb 18 22:15:59.330: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 18 22:15:59.337: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 18 22:15:59.458: INFO: Waiting for pod pod-host-path-test to disappear
Feb 18 22:15:59.476: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:15:59.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1219" for this suite.

• [SLOW TEST:10.410 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2700,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
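The hostPath test above (`pod-host-path-test`) mounts a host directory and verifies the mode the container observes. A comparable minimal pod (paths and names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo
spec:
  containers:
  - name: test-container-1
    image: busybox
    # Print the octal mode of the mount point, which is what the
    # conformance test asserts on.
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-demo
      type: DirectoryOrCreate
```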
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:15:59.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-eae5109b-fbd9-4094-8cea-3fde6e506d4c
STEP: Creating a pod to test consume secrets
Feb 18 22:15:59.831: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60b341ad-6991-41c5-af4b-0b213878910f" in namespace "projected-2439" to be "success or failure"
Feb 18 22:15:59.841: INFO: Pod "pod-projected-secrets-60b341ad-6991-41c5-af4b-0b213878910f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.146245ms
Feb 18 22:16:01.850: INFO: Pod "pod-projected-secrets-60b341ad-6991-41c5-af4b-0b213878910f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018457388s
Feb 18 22:16:03.862: INFO: Pod "pod-projected-secrets-60b341ad-6991-41c5-af4b-0b213878910f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030651841s
Feb 18 22:16:05.895: INFO: Pod "pod-projected-secrets-60b341ad-6991-41c5-af4b-0b213878910f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063758777s
Feb 18 22:16:07.903: INFO: Pod "pod-projected-secrets-60b341ad-6991-41c5-af4b-0b213878910f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071511333s
STEP: Saw pod success
Feb 18 22:16:07.903: INFO: Pod "pod-projected-secrets-60b341ad-6991-41c5-af4b-0b213878910f" satisfied condition "success or failure"
Feb 18 22:16:07.911: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-60b341ad-6991-41c5-af4b-0b213878910f container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 22:16:07.985: INFO: Waiting for pod pod-projected-secrets-60b341ad-6991-41c5-af4b-0b213878910f to disappear
Feb 18 22:16:07.997: INFO: Pod pod-projected-secrets-60b341ad-6991-41c5-af4b-0b213878910f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:16:07.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2439" for this suite.

• [SLOW TEST:8.503 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2731,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
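"With mappings" in the projected-secret test above means individual secret keys are remapped to custom file paths via `items`. A sketch of that pattern (secret name and key are hypothetical; the secret must already exist):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/renamed-key"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1            # key in the Secret
            path: renamed-key      # file name inside the mount
```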
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:16:08.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:16:08.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-249" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":169,"skipped":2752,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
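The QoS test above checks that a pod whose containers set matching requests and limits for both cpu and memory is assigned the Guaranteed class. A minimal pod that would get `status.qosClass: Guaranteed` (illustrative, not the suite's exact spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "100m"        # limits == requests for every container
        memory: "128Mi"    # => status.qosClass: Guaranteed
```

Dropping the limits (or making them differ from the requests) would demote the pod to Burstable, and omitting both requests and limits entirely yields BestEffort.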
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:16:08.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-d494b063-55d1-4ae5-841c-4e80c040dac5
STEP: Creating a pod to test consume secrets
Feb 18 22:16:08.446: INFO: Waiting up to 5m0s for pod "pod-secrets-c7105300-3684-426e-9f90-809ec7b10527" in namespace "secrets-5673" to be "success or failure"
Feb 18 22:16:08.469: INFO: Pod "pod-secrets-c7105300-3684-426e-9f90-809ec7b10527": Phase="Pending", Reason="", readiness=false. Elapsed: 23.176068ms
Feb 18 22:16:10.483: INFO: Pod "pod-secrets-c7105300-3684-426e-9f90-809ec7b10527": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036860794s
Feb 18 22:16:12.499: INFO: Pod "pod-secrets-c7105300-3684-426e-9f90-809ec7b10527": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053173519s
Feb 18 22:16:14.596: INFO: Pod "pod-secrets-c7105300-3684-426e-9f90-809ec7b10527": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149967586s
Feb 18 22:16:17.083: INFO: Pod "pod-secrets-c7105300-3684-426e-9f90-809ec7b10527": Phase="Pending", Reason="", readiness=false. Elapsed: 8.636507877s
Feb 18 22:16:19.091: INFO: Pod "pod-secrets-c7105300-3684-426e-9f90-809ec7b10527": Phase="Pending", Reason="", readiness=false. Elapsed: 10.644825947s
Feb 18 22:16:21.099: INFO: Pod "pod-secrets-c7105300-3684-426e-9f90-809ec7b10527": Phase="Pending", Reason="", readiness=false. Elapsed: 12.65270271s
Feb 18 22:16:24.913: INFO: Pod "pod-secrets-c7105300-3684-426e-9f90-809ec7b10527": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.466806092s
STEP: Saw pod success
Feb 18 22:16:24.913: INFO: Pod "pod-secrets-c7105300-3684-426e-9f90-809ec7b10527" satisfied condition "success or failure"
Feb 18 22:16:25.373: INFO: Trying to get logs from node jerma-node pod pod-secrets-c7105300-3684-426e-9f90-809ec7b10527 container secret-volume-test: 
STEP: delete the pod
Feb 18 22:16:26.007: INFO: Waiting for pod pod-secrets-c7105300-3684-426e-9f90-809ec7b10527 to disappear
Feb 18 22:16:26.013: INFO: Pod pod-secrets-c7105300-3684-426e-9f90-809ec7b10527 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:16:26.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5673" for this suite.

• [SLOW TEST:17.828 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2769,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:16:26.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-1b3214df-478d-4b38-aa34-820e15c23eef
STEP: Creating a pod to test consume secrets
Feb 18 22:16:26.213: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ad4c10a4-b0b2-47cf-96a2-0c5dc59d2007" in namespace "projected-6307" to be "success or failure"
Feb 18 22:16:26.224: INFO: Pod "pod-projected-secrets-ad4c10a4-b0b2-47cf-96a2-0c5dc59d2007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.139548ms
Feb 18 22:16:28.263: INFO: Pod "pod-projected-secrets-ad4c10a4-b0b2-47cf-96a2-0c5dc59d2007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050682972s
Feb 18 22:16:30.268: INFO: Pod "pod-projected-secrets-ad4c10a4-b0b2-47cf-96a2-0c5dc59d2007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055534264s
Feb 18 22:16:32.274: INFO: Pod "pod-projected-secrets-ad4c10a4-b0b2-47cf-96a2-0c5dc59d2007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060901263s
Feb 18 22:16:34.295: INFO: Pod "pod-projected-secrets-ad4c10a4-b0b2-47cf-96a2-0c5dc59d2007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081922022s
Feb 18 22:16:36.303: INFO: Pod "pod-projected-secrets-ad4c10a4-b0b2-47cf-96a2-0c5dc59d2007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090137562s
STEP: Saw pod success
Feb 18 22:16:36.303: INFO: Pod "pod-projected-secrets-ad4c10a4-b0b2-47cf-96a2-0c5dc59d2007" satisfied condition "success or failure"
Feb 18 22:16:36.309: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-ad4c10a4-b0b2-47cf-96a2-0c5dc59d2007 container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 22:16:36.399: INFO: Waiting for pod pod-projected-secrets-ad4c10a4-b0b2-47cf-96a2-0c5dc59d2007 to disappear
Feb 18 22:16:36.431: INFO: Pod pod-projected-secrets-ad4c10a4-b0b2-47cf-96a2-0c5dc59d2007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:16:36.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6307" for this suite.

• [SLOW TEST:10.459 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2769,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
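This variant of the projected-secret test runs as non-root with both `defaultMode` and `fsGroup` set, then checks the resulting file ownership and permissions. A sketch under those assumptions (secret name and uid/gid values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mode-demo
spec:
  securityContext:
    runAsUser: 1000        # non-root
    fsGroup: 2000          # volume files are group-owned by this gid
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0440    # applied to projected files unless overridden
      sources:
      - secret:
          name: projected-secret-test   # assumed to exist
```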
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:16:36.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Feb 18 22:16:36.619: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix728114100/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:16:36.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-117" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":172,"skipped":2769,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
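The proxy test above starts `kubectl proxy --unix-socket=...` and then retrieves `/api/` through that socket. A self-contained sketch of how a client talks HTTP over a Unix domain socket; the in-process server below is a stand-in for the proxy (its JSON body and the socket path are made up for the demo):

```python
# Hypothetical sketch: HTTP over a Unix domain socket, the way a client
# reaches `kubectl proxy --unix-socket=/path`. The handler below is a
# stand-in for the proxy, not the real apiserver.
import http.client
import os
import socket
import socketserver
import tempfile
import threading
from http.server import BaseHTTPRequestHandler

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client.HTTPConnection that connects to a Unix socket path."""
    def __init__(self, socket_path):
        super().__init__("localhost")  # host only feeds the Host: header
        self._socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._socket_path)
        self.sock = sock

class _FakeProxyHandler(BaseHTTPRequestHandler):
    """Answers GET /api/ the way the test retrieves proxy output."""
    def do_GET(self):
        body = b'{"kind":"APIVersions","versions":["v1"]}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def get_over_unix_socket(socket_path, path="/api/"):
    conn = UnixHTTPConnection(socket_path)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        return resp.status, resp.read().decode()
    finally:
        conn.close()

# Demo: serve on a throwaway socket path and fetch /api/ through it.
sock_path = os.path.join(tempfile.mkdtemp(), "proxy.sock")
server = socketserver.UnixStreamServer(sock_path, _FakeProxyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
status, body = get_over_unix_socket(sock_path)
server.shutdown()
```

With the real proxy, the same `UnixHTTPConnection` pointed at the `--unix-socket` path would return the apiserver's `/api/` discovery document.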
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:16:36.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8110.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8110.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8110.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8110.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 22:16:49.037: INFO: DNS probes using dns-test-cdfcbd71-c29f-4940-9f28-a72e2ea7586a succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8110.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8110.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8110.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8110.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 22:17:03.158: INFO: File wheezy_udp@dns-test-service-3.dns-8110.svc.cluster.local from pod  dns-8110/dns-test-35850a1e-7c7b-49da-b63d-ead56cf2aa2b contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 18 22:17:03.164: INFO: File jessie_udp@dns-test-service-3.dns-8110.svc.cluster.local from pod  dns-8110/dns-test-35850a1e-7c7b-49da-b63d-ead56cf2aa2b contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 18 22:17:03.164: INFO: Lookups using dns-8110/dns-test-35850a1e-7c7b-49da-b63d-ead56cf2aa2b failed for: [wheezy_udp@dns-test-service-3.dns-8110.svc.cluster.local jessie_udp@dns-test-service-3.dns-8110.svc.cluster.local]

Feb 18 22:17:08.174: INFO: File wheezy_udp@dns-test-service-3.dns-8110.svc.cluster.local from pod  dns-8110/dns-test-35850a1e-7c7b-49da-b63d-ead56cf2aa2b contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 18 22:17:08.181: INFO: File jessie_udp@dns-test-service-3.dns-8110.svc.cluster.local from pod  dns-8110/dns-test-35850a1e-7c7b-49da-b63d-ead56cf2aa2b contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 18 22:17:08.181: INFO: Lookups using dns-8110/dns-test-35850a1e-7c7b-49da-b63d-ead56cf2aa2b failed for: [wheezy_udp@dns-test-service-3.dns-8110.svc.cluster.local jessie_udp@dns-test-service-3.dns-8110.svc.cluster.local]

Feb 18 22:17:13.174: INFO: File wheezy_udp@dns-test-service-3.dns-8110.svc.cluster.local from pod  dns-8110/dns-test-35850a1e-7c7b-49da-b63d-ead56cf2aa2b contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 18 22:17:13.181: INFO: File jessie_udp@dns-test-service-3.dns-8110.svc.cluster.local from pod  dns-8110/dns-test-35850a1e-7c7b-49da-b63d-ead56cf2aa2b contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 18 22:17:13.181: INFO: Lookups using dns-8110/dns-test-35850a1e-7c7b-49da-b63d-ead56cf2aa2b failed for: [wheezy_udp@dns-test-service-3.dns-8110.svc.cluster.local jessie_udp@dns-test-service-3.dns-8110.svc.cluster.local]

Feb 18 22:17:18.182: INFO: DNS probes using dns-test-35850a1e-7c7b-49da-b63d-ead56cf2aa2b succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8110.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8110.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8110.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8110.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 22:17:32.483: INFO: DNS probes using dns-test-d64e8a44-e14f-41cf-9a47-7575c03ef39d succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:17:32.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8110" for this suite.

• [SLOW TEST:55.965 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":173,"skipped":2777,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
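The failed-then-succeeded lookups above are expected: after the Service's `externalName` changes from `foo.example.com` to `bar.example.com`, cluster DNS keeps answering with the old CNAME for a few poll intervals before converging. A sketch of the prober's retry pattern, with `resolve` standing in for running `dig +short ... CNAME`:

```python
# Sketch of the probe's retry loop: poll the resolver until the CNAME
# answer matches the updated externalName. `resolve` is a stand-in for
# `dig +short dns-test-service-3.<ns>.svc.cluster.local CNAME`.
import time

def wait_for_cname(resolve, expected, attempts=30, delay=0.0):
    """Poll resolve() until it returns `expected` (trailing dot ignored).

    Returns the number of stale answers seen before success; raises
    TimeoutError if the record never converges within `attempts` polls.
    """
    for stale in range(attempts):
        answer = resolve().strip()
        if answer.rstrip(".") == expected.rstrip("."):
            return stale
        time.sleep(delay)
    raise TimeoutError(f"CNAME never became {expected!r}")

# Simulate DNS propagation: three stale answers, then the new record,
# mirroring the three failed lookup rounds in the log above.
answers = iter(["foo.example.com.", "foo.example.com.", "foo.example.com.",
                "bar.example.com."])
stale_count = wait_for_cname(lambda: next(answers), "bar.example.com")
```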
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:17:32.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-4404
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4404
STEP: creating replication controller externalsvc in namespace services-4404
I0218 22:17:32.912855       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4404, replica count: 2
I0218 22:17:35.963633       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:17:38.964060       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:17:41.964935       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:17:44.965412       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:17:47.965833       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Feb 18 22:17:48.013: INFO: Creating new exec pod
Feb 18 22:17:56.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4404 execpodwmm6f -- /bin/sh -x -c nslookup nodeport-service'
Feb 18 22:17:58.966: INFO: stderr: "I0218 22:17:58.690447    2070 log.go:172] (0xc0003d8210) (0xc0006ff680) Create stream\nI0218 22:17:58.690710    2070 log.go:172] (0xc0003d8210) (0xc0006ff680) Stream added, broadcasting: 1\nI0218 22:17:58.697294    2070 log.go:172] (0xc0003d8210) Reply frame received for 1\nI0218 22:17:58.697388    2070 log.go:172] (0xc0003d8210) (0xc000bb2000) Create stream\nI0218 22:17:58.697405    2070 log.go:172] (0xc0003d8210) (0xc000bb2000) Stream added, broadcasting: 3\nI0218 22:17:58.699467    2070 log.go:172] (0xc0003d8210) Reply frame received for 3\nI0218 22:17:58.699543    2070 log.go:172] (0xc0003d8210) (0xc0008ea0a0) Create stream\nI0218 22:17:58.699559    2070 log.go:172] (0xc0003d8210) (0xc0008ea0a0) Stream added, broadcasting: 5\nI0218 22:17:58.701266    2070 log.go:172] (0xc0003d8210) Reply frame received for 5\nI0218 22:17:58.781461    2070 log.go:172] (0xc0003d8210) Data frame received for 5\nI0218 22:17:58.781604    2070 log.go:172] (0xc0008ea0a0) (5) Data frame handling\nI0218 22:17:58.781663    2070 log.go:172] (0xc0008ea0a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0218 22:17:58.854842    2070 log.go:172] (0xc0003d8210) Data frame received for 3\nI0218 22:17:58.855524    2070 log.go:172] (0xc000bb2000) (3) Data frame handling\nI0218 22:17:58.855622    2070 log.go:172] (0xc000bb2000) (3) Data frame sent\nI0218 22:17:58.860206    2070 log.go:172] (0xc0003d8210) Data frame received for 3\nI0218 22:17:58.860249    2070 log.go:172] (0xc000bb2000) (3) Data frame handling\nI0218 22:17:58.860302    2070 log.go:172] (0xc000bb2000) (3) Data frame sent\nI0218 22:17:58.956160    2070 log.go:172] (0xc0003d8210) Data frame received for 1\nI0218 22:17:58.956233    2070 log.go:172] (0xc0006ff680) (1) Data frame handling\nI0218 22:17:58.956256    2070 log.go:172] (0xc0006ff680) (1) Data frame sent\nI0218 22:17:58.958346    2070 log.go:172] (0xc0003d8210) (0xc000bb2000) Stream removed, broadcasting: 3\nI0218 22:17:58.958389    2070 log.go:172] (0xc0003d8210) (0xc0006ff680) Stream removed, broadcasting: 1\nI0218 22:17:58.959470    2070 log.go:172] (0xc0003d8210) (0xc0008ea0a0) Stream removed, broadcasting: 5\nI0218 22:17:58.959515    2070 log.go:172] (0xc0003d8210) Go away received\nI0218 22:17:58.959619    2070 log.go:172] (0xc0003d8210) (0xc0006ff680) Stream removed, broadcasting: 1\nI0218 22:17:58.959679    2070 log.go:172] (0xc0003d8210) (0xc000bb2000) Stream removed, broadcasting: 3\nI0218 22:17:58.959711    2070 log.go:172] (0xc0003d8210) (0xc0008ea0a0) Stream removed, broadcasting: 5\n"
Feb 18 22:17:58.966: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4404.svc.cluster.local\tcanonical name = externalsvc.services-4404.svc.cluster.local.\nName:\texternalsvc.services-4404.svc.cluster.local\nAddress: 10.96.188.166\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4404, will wait for the garbage collector to delete the pods
Feb 18 22:17:59.034: INFO: Deleting ReplicationController externalsvc took: 9.183999ms
Feb 18 22:17:59.535: INFO: Terminating ReplicationController externalsvc pods took: 501.148469ms
Feb 18 22:18:12.401: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:18:12.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4404" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:39.772 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":174,"skipped":2867,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
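The test above flips a Service from `type=NodePort` to `type=ExternalName`, after which `nslookup nodeport-service` returns a CNAME to `externalsvc`. A sketch of the spec change involved, assuming the v1 Service field names (the dict manipulation and example values are illustrative, not the e2e framework's code):

```python
# Hedged sketch of the spec change this test performs: retargeting a
# NodePort Service at an external DNS name. ExternalName services have no
# cluster IP and no node ports, so those fields must be cleared.

def to_external_name(service, external_name):
    """Return a copy of a Service dict converted to type=ExternalName."""
    spec = dict(service["spec"])
    spec["type"] = "ExternalName"
    spec["externalName"] = external_name
    spec.pop("clusterIP", None)          # ExternalName allocates no VIP
    spec["ports"] = [                    # and per-port nodePorts are released
        {k: v for k, v in port.items() if k != "nodePort"}
        for port in spec.get("ports", [])
    ]
    return {**service, "spec": spec}

nodeport_svc = {
    "metadata": {"name": "nodeport-service", "namespace": "services-4404"},
    "spec": {
        "type": "NodePort",
        "clusterIP": "10.96.10.20",   # illustrative value
        "ports": [{"port": 80, "targetPort": 80, "nodePort": 30080}],
    },
}
externalname_svc = to_external_name(
    nodeport_svc, "externalsvc.services-4404.svc.cluster.local")
```

After this change, resolving `nodeport-service` yields the CNAME seen in the nslookup output above rather than a cluster IP.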
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:18:12.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:18:28.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6605" for this suite.

• [SLOW TEST:16.198 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":175,"skipped":2869,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
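"Capture the life of a configMap" above means the quota's status tracks used object counts against hard limits: usage rises on create and falls on delete. A minimal model of that accounting (a sketch, not the apiserver's quota controller):

```python
# Minimal model of ResourceQuota object-count accounting: create raises
# `used`, delete releases it, and creation beyond `hard` is rejected.

class ResourceQuota:
    def __init__(self, hard):
        self.hard = dict(hard)            # e.g. {"configmaps": 1}
        self.used = {k: 0 for k in hard}  # status starts at zero

    def create(self, resource):
        if self.used[resource] + 1 > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] += 1

    def delete(self, resource):
        self.used[resource] -= 1

quota = ResourceQuota({"configmaps": 1})
quota.create("configmaps")                     # status captures the creation
used_after_create = quota.used["configmaps"]
quota.delete("configmaps")                     # deletion releases the usage
used_after_delete = quota.used["configmaps"]
```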
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:18:28.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-a0a71494-f476-4b43-ae0d-a7ac5960d1fe
STEP: Creating configMap with name cm-test-opt-upd-36bf4671-6b1d-4b50-94f6-9859057ff6bc
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a0a71494-f476-4b43-ae0d-a7ac5960d1fe
STEP: Updating configmap cm-test-opt-upd-36bf4671-6b1d-4b50-94f6-9859057ff6bc
STEP: Creating configMap with name cm-test-opt-create-c3e7c30d-a209-4885-9713-ac2bb222b83d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:19:52.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4779" for this suite.

• [SLOW TEST:83.426 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2878,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
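The "optional" in the test above matters because a projected volume merges several configMap sources into one directory: an optional source whose configMap is deleted is simply skipped (its files disappear from the volume) instead of failing the mount. A simplified model of that merge, using names patterned on the test's configMaps (the keys and values are illustrative):

```python
# Sketch of a projected volume merging optional configMap sources.
# A deleted optional source drops out; a missing required source errors.

def project(sources, configmaps):
    """Merge configMap sources into {filename: content}.

    sources:    [{"name": ..., "optional": bool}]
    configmaps: {name: {key: value}} currently existing in the namespace
    """
    files = {}
    for src in sources:
        data = configmaps.get(src["name"])
        if data is None:
            if src.get("optional"):
                continue           # optional and absent: skip its files
            raise KeyError(f"configMap {src['name']} not found")
        files.update(data)
    return files

configmaps = {
    "cm-test-opt-upd": {"data-1": "value-1"},
    "cm-test-opt-create": {"data-3": "value-3"},
}
# cm-test-opt-del has been deleted, but it is optional, so the volume still
# reflects the remaining sources' keys.
files = project(
    [{"name": "cm-test-opt-del", "optional": True},
     {"name": "cm-test-opt-upd", "optional": True},
     {"name": "cm-test-opt-create", "optional": True}],
    configmaps,
)
```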
SSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:19:52.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 18 22:20:02.757: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4a81719b-fe9d-4432-baab-6a68adc24e9a"
Feb 18 22:20:02.758: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4a81719b-fe9d-4432-baab-6a68adc24e9a" in namespace "pods-123" to be "terminated due to deadline exceeded"
Feb 18 22:20:02.762: INFO: Pod "pod-update-activedeadlineseconds-4a81719b-fe9d-4432-baab-6a68adc24e9a": Phase="Running", Reason="", readiness=true. Elapsed: 4.425812ms
Feb 18 22:20:04.772: INFO: Pod "pod-update-activedeadlineseconds-4a81719b-fe9d-4432-baab-6a68adc24e9a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.014427351s
Feb 18 22:20:04.772: INFO: Pod "pod-update-activedeadlineseconds-4a81719b-fe9d-4432-baab-6a68adc24e9a" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:20:04.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-123" for this suite.

• [SLOW TEST:12.710 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2885,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
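The transition above (Running at 4.4ms elapsed, Failed/DeadlineExceeded at ~2s) follows from the deadline check the test exercises: once a running pod's age exceeds `spec.activeDeadlineSeconds`, it is marked Failed with reason DeadlineExceeded. A deterministic sketch of that check, with the clock passed in (the function is illustrative, not the kubelet's code):

```python
# Sketch of activeDeadlineSeconds enforcement: a pod whose age exceeds
# the deadline flips from Running to Failed/DeadlineExceeded.

def check_deadline(start_time, active_deadline_seconds, now):
    """Return (phase, reason) for a pod given its start time and deadline."""
    if active_deadline_seconds is None:
        return "Running", ""
    if now - start_time >= active_deadline_seconds:
        return "Failed", "DeadlineExceeded"
    return "Running", ""

# The test patches activeDeadlineSeconds down on a running pod; shortly
# afterwards the pod crosses the deadline and fails.
phase_before = check_deadline(start_time=0, active_deadline_seconds=5, now=4)
phase_after = check_deadline(start_time=0, active_deadline_seconds=5, now=6)
```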
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:20:04.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-37b023db-e5ec-4702-aab9-4429ddc90722
STEP: Creating a pod to test consume configMaps
Feb 18 22:20:05.122: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8" in namespace "projected-4295" to be "success or failure"
Feb 18 22:20:05.212: INFO: Pod "pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8": Phase="Pending", Reason="", readiness=false. Elapsed: 89.68849ms
Feb 18 22:20:07.219: INFO: Pod "pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097000507s
Feb 18 22:20:09.225: INFO: Pod "pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103198837s
Feb 18 22:20:11.232: INFO: Pod "pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110159204s
Feb 18 22:20:13.239: INFO: Pod "pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116728708s
Feb 18 22:20:15.246: INFO: Pod "pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.12409447s
Feb 18 22:20:17.255: INFO: Pod "pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.132850105s
Feb 18 22:20:19.259: INFO: Pod "pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.136938428s
STEP: Saw pod success
Feb 18 22:20:19.259: INFO: Pod "pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8" satisfied condition "success or failure"
Feb 18 22:20:19.261: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 22:20:19.298: INFO: Waiting for pod pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8 to disappear
Feb 18 22:20:19.306: INFO: Pod pod-projected-configmaps-9dc8dc7d-7d20-4cc9-a424-f8d8f7c5fae8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:20:19.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4295" for this suite.

• [SLOW TEST:14.528 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2896,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:20:19.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 18 22:20:19.530: INFO: Waiting up to 5m0s for pod "downward-api-192851a2-f903-4d33-b83b-c745264daf0c" in namespace "downward-api-1073" to be "success or failure"
Feb 18 22:20:19.549: INFO: Pod "downward-api-192851a2-f903-4d33-b83b-c745264daf0c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.689575ms
Feb 18 22:20:21.555: INFO: Pod "downward-api-192851a2-f903-4d33-b83b-c745264daf0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025075468s
Feb 18 22:20:23.562: INFO: Pod "downward-api-192851a2-f903-4d33-b83b-c745264daf0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032351439s
Feb 18 22:20:26.219: INFO: Pod "downward-api-192851a2-f903-4d33-b83b-c745264daf0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.68920768s
Feb 18 22:20:28.228: INFO: Pod "downward-api-192851a2-f903-4d33-b83b-c745264daf0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.69758579s
Feb 18 22:20:30.237: INFO: Pod "downward-api-192851a2-f903-4d33-b83b-c745264daf0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.706933807s
STEP: Saw pod success
Feb 18 22:20:30.237: INFO: Pod "downward-api-192851a2-f903-4d33-b83b-c745264daf0c" satisfied condition "success or failure"
Feb 18 22:20:30.241: INFO: Trying to get logs from node jerma-node pod downward-api-192851a2-f903-4d33-b83b-c745264daf0c container dapi-container: 
STEP: delete the pod
Feb 18 22:20:30.279: INFO: Waiting for pod downward-api-192851a2-f903-4d33-b83b-c745264daf0c to disappear
Feb 18 22:20:30.304: INFO: Pod downward-api-192851a2-f903-4d33-b83b-c745264daf0c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:20:30.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1073" for this suite.

• [SLOW TEST:11.010 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2964,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
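The downward API test above injects the node's IP into the container via an env var with a `fieldRef` of `status.hostIP`. A sketch of how such a fieldPath resolves against a pod object, using the v1 fieldPath names; the resolver and the example IPs are illustrative:

```python
# Sketch of downward-API fieldRef resolution: walk a pod object by the
# dotted fieldPath (e.g. status.hostIP, metadata.name).

def resolve_field_ref(pod, field_path):
    """Resolve a downward-API fieldPath like 'status.hostIP' against a pod."""
    value = pod
    for part in field_path.split("."):
        value = value[part]
    return value

pod = {
    "metadata": {"name": "downward-api-pod", "namespace": "downward-api-1073"},
    "status": {"hostIP": "10.96.1.234", "podIP": "10.44.0.1"},  # illustrative
}
# The env vars the container sees, built from fieldRefs:
env = {
    "HOST_IP": resolve_field_ref(pod, "status.hostIP"),
    "POD_NAME": resolve_field_ref(pod, "metadata.name"),
}
```

The test then greps the container's log for `HOST_IP=...` to confirm the injected value matches the node.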
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:20:30.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 22:20:30.497: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ebe9a13f-bde3-4902-9cac-90610ba942ce" in namespace "projected-1481" to be "success or failure"
Feb 18 22:20:30.508: INFO: Pod "downwardapi-volume-ebe9a13f-bde3-4902-9cac-90610ba942ce": Phase="Pending", Reason="", readiness=false. Elapsed: 10.729727ms
Feb 18 22:20:32.519: INFO: Pod "downwardapi-volume-ebe9a13f-bde3-4902-9cac-90610ba942ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021428555s
Feb 18 22:20:34.528: INFO: Pod "downwardapi-volume-ebe9a13f-bde3-4902-9cac-90610ba942ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03031572s
Feb 18 22:20:36.535: INFO: Pod "downwardapi-volume-ebe9a13f-bde3-4902-9cac-90610ba942ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037570208s
Feb 18 22:20:38.546: INFO: Pod "downwardapi-volume-ebe9a13f-bde3-4902-9cac-90610ba942ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048175735s
STEP: Saw pod success
Feb 18 22:20:38.546: INFO: Pod "downwardapi-volume-ebe9a13f-bde3-4902-9cac-90610ba942ce" satisfied condition "success or failure"
Feb 18 22:20:38.552: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ebe9a13f-bde3-4902-9cac-90610ba942ce container client-container: 
STEP: delete the pod
Feb 18 22:20:38.600: INFO: Waiting for pod downwardapi-volume-ebe9a13f-bde3-4902-9cac-90610ba942ce to disappear
Feb 18 22:20:38.745: INFO: Pod downwardapi-volume-ebe9a13f-bde3-4902-9cac-90610ba942ce no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:20:38.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1481" for this suite.

• [SLOW TEST:8.455 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2978,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:20:38.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1756
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-1756
I0218 22:20:39.507792       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1756, replica count: 2
I0218 22:20:42.559287       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:20:45.560210       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:20:48.561841       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:20:51.562432       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 18 22:20:51.562: INFO: Creating new exec pod
Feb 18 22:21:02.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1756 execpoddqjmt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 18 22:21:02.987: INFO: stderr: "I0218 22:21:02.797633    2097 log.go:172] (0xc000935130) (0xc000a005a0) Create stream\nI0218 22:21:02.797939    2097 log.go:172] (0xc000935130) (0xc000a005a0) Stream added, broadcasting: 1\nI0218 22:21:02.808763    2097 log.go:172] (0xc000935130) Reply frame received for 1\nI0218 22:21:02.808820    2097 log.go:172] (0xc000935130) (0xc0006c0640) Create stream\nI0218 22:21:02.808832    2097 log.go:172] (0xc000935130) (0xc0006c0640) Stream added, broadcasting: 3\nI0218 22:21:02.809843    2097 log.go:172] (0xc000935130) Reply frame received for 3\nI0218 22:21:02.809866    2097 log.go:172] (0xc000935130) (0xc000455400) Create stream\nI0218 22:21:02.809873    2097 log.go:172] (0xc000935130) (0xc000455400) Stream added, broadcasting: 5\nI0218 22:21:02.811046    2097 log.go:172] (0xc000935130) Reply frame received for 5\nI0218 22:21:02.884045    2097 log.go:172] (0xc000935130) Data frame received for 5\nI0218 22:21:02.884094    2097 log.go:172] (0xc000455400) (5) Data frame handling\nI0218 22:21:02.884110    2097 log.go:172] (0xc000455400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0218 22:21:02.890690    2097 log.go:172] (0xc000935130) Data frame received for 5\nI0218 22:21:02.890735    2097 log.go:172] (0xc000455400) (5) Data frame handling\nI0218 22:21:02.890754    2097 log.go:172] (0xc000455400) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0218 22:21:02.979372    2097 log.go:172] (0xc000935130) Data frame received for 1\nI0218 22:21:02.979533    2097 log.go:172] (0xc000935130) (0xc0006c0640) Stream removed, broadcasting: 3\nI0218 22:21:02.979766    2097 log.go:172] (0xc000a005a0) (1) Data frame handling\nI0218 22:21:02.979822    2097 log.go:172] (0xc000a005a0) (1) Data frame sent\nI0218 22:21:02.979842    2097 log.go:172] (0xc000935130) (0xc000455400) Stream removed, broadcasting: 5\nI0218 22:21:02.979873    2097 log.go:172] (0xc000935130) (0xc000a005a0) Stream removed, broadcasting: 1\nI0218 22:21:02.979889    2097 log.go:172] (0xc000935130) Go away received\nI0218 22:21:02.980699    2097 log.go:172] (0xc000935130) (0xc000a005a0) Stream removed, broadcasting: 1\nI0218 22:21:02.980710    2097 log.go:172] (0xc000935130) (0xc0006c0640) Stream removed, broadcasting: 3\nI0218 22:21:02.980716    2097 log.go:172] (0xc000935130) (0xc000455400) Stream removed, broadcasting: 5\n"
Feb 18 22:21:02.987: INFO: stdout: ""
Feb 18 22:21:02.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1756 execpoddqjmt -- /bin/sh -x -c nc -zv -t -w 2 10.96.73.3 80'
Feb 18 22:21:03.282: INFO: stderr: "I0218 22:21:03.135638    2117 log.go:172] (0xc0003ac8f0) (0xc0007d79a0) Create stream\nI0218 22:21:03.135720    2117 log.go:172] (0xc0003ac8f0) (0xc0007d79a0) Stream added, broadcasting: 1\nI0218 22:21:03.140035    2117 log.go:172] (0xc0003ac8f0) Reply frame received for 1\nI0218 22:21:03.140129    2117 log.go:172] (0xc0003ac8f0) (0xc0007d7b80) Create stream\nI0218 22:21:03.140143    2117 log.go:172] (0xc0003ac8f0) (0xc0007d7b80) Stream added, broadcasting: 3\nI0218 22:21:03.141284    2117 log.go:172] (0xc0003ac8f0) Reply frame received for 3\nI0218 22:21:03.141307    2117 log.go:172] (0xc0003ac8f0) (0xc000922000) Create stream\nI0218 22:21:03.141320    2117 log.go:172] (0xc0003ac8f0) (0xc000922000) Stream added, broadcasting: 5\nI0218 22:21:03.142424    2117 log.go:172] (0xc0003ac8f0) Reply frame received for 5\nI0218 22:21:03.203131    2117 log.go:172] (0xc0003ac8f0) Data frame received for 5\nI0218 22:21:03.203191    2117 log.go:172] (0xc000922000) (5) Data frame handling\nI0218 22:21:03.203215    2117 log.go:172] (0xc000922000) (5) Data frame sent\nI0218 22:21:03.203224    2117 log.go:172] (0xc0003ac8f0) Data frame received for 5\nI0218 22:21:03.203231    2117 log.go:172] (0xc000922000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.73.3 80\nI0218 22:21:03.203276    2117 log.go:172] (0xc000922000) (5) Data frame sent\nI0218 22:21:03.204008    2117 log.go:172] (0xc0003ac8f0) Data frame received for 5\nI0218 22:21:03.204019    2117 log.go:172] (0xc000922000) (5) Data frame handling\nI0218 22:21:03.204030    2117 log.go:172] (0xc000922000) (5) Data frame sent\nConnection to 10.96.73.3 80 port [tcp/http] succeeded!\nI0218 22:21:03.272767    2117 log.go:172] (0xc0003ac8f0) (0xc0007d7b80) Stream removed, broadcasting: 3\nI0218 22:21:03.272986    2117 log.go:172] (0xc0003ac8f0) Data frame received for 1\nI0218 22:21:03.273012    2117 log.go:172] (0xc0007d79a0) (1) Data frame handling\nI0218 22:21:03.273041    2117 log.go:172] (0xc0007d79a0) (1) Data frame sent\nI0218 22:21:03.273053    2117 log.go:172] (0xc0003ac8f0) (0xc0007d79a0) Stream removed, broadcasting: 1\nI0218 22:21:03.273410    2117 log.go:172] (0xc0003ac8f0) (0xc000922000) Stream removed, broadcasting: 5\nI0218 22:21:03.273528    2117 log.go:172] (0xc0003ac8f0) Go away received\nI0218 22:21:03.274429    2117 log.go:172] (0xc0003ac8f0) (0xc0007d79a0) Stream removed, broadcasting: 1\nI0218 22:21:03.274445    2117 log.go:172] (0xc0003ac8f0) (0xc0007d7b80) Stream removed, broadcasting: 3\nI0218 22:21:03.274465    2117 log.go:172] (0xc0003ac8f0) (0xc000922000) Stream removed, broadcasting: 5\n"
Feb 18 22:21:03.282: INFO: stdout: ""
Feb 18 22:21:03.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1756 execpoddqjmt -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30154'
Feb 18 22:21:03.615: INFO: stderr: "I0218 22:21:03.452854    2137 log.go:172] (0xc000b7e2c0) (0xc0007d8000) Create stream\nI0218 22:21:03.453038    2137 log.go:172] (0xc000b7e2c0) (0xc0007d8000) Stream added, broadcasting: 1\nI0218 22:21:03.459297    2137 log.go:172] (0xc000b7e2c0) Reply frame received for 1\nI0218 22:21:03.459345    2137 log.go:172] (0xc000b7e2c0) (0xc0007d80a0) Create stream\nI0218 22:21:03.459354    2137 log.go:172] (0xc000b7e2c0) (0xc0007d80a0) Stream added, broadcasting: 3\nI0218 22:21:03.460592    2137 log.go:172] (0xc000b7e2c0) Reply frame received for 3\nI0218 22:21:03.460620    2137 log.go:172] (0xc000b7e2c0) (0xc0007d8140) Create stream\nI0218 22:21:03.460635    2137 log.go:172] (0xc000b7e2c0) (0xc0007d8140) Stream added, broadcasting: 5\nI0218 22:21:03.461956    2137 log.go:172] (0xc000b7e2c0) Reply frame received for 5\nI0218 22:21:03.525946    2137 log.go:172] (0xc000b7e2c0) Data frame received for 5\nI0218 22:21:03.525993    2137 log.go:172] (0xc0007d8140) (5) Data frame handling\nI0218 22:21:03.526017    2137 log.go:172] (0xc0007d8140) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30154\nI0218 22:21:03.528450    2137 log.go:172] (0xc000b7e2c0) Data frame received for 5\nI0218 22:21:03.528487    2137 log.go:172] (0xc0007d8140) (5) Data frame handling\nI0218 22:21:03.528514    2137 log.go:172] (0xc0007d8140) (5) Data frame sent\nConnection to 10.96.2.250 30154 port [tcp/30154] succeeded!\nI0218 22:21:03.600247    2137 log.go:172] (0xc000b7e2c0) Data frame received for 1\nI0218 22:21:03.600399    2137 log.go:172] (0xc000b7e2c0) (0xc0007d8140) Stream removed, broadcasting: 5\nI0218 22:21:03.600476    2137 log.go:172] (0xc0007d8000) (1) Data frame handling\nI0218 22:21:03.600497    2137 log.go:172] (0xc0007d8000) (1) Data frame sent\nI0218 22:21:03.600539    2137 log.go:172] (0xc000b7e2c0) (0xc0007d80a0) Stream removed, broadcasting: 3\nI0218 22:21:03.600582    2137 log.go:172] (0xc000b7e2c0) (0xc0007d8000) Stream removed, broadcasting: 1\nI0218 22:21:03.600618    2137 log.go:172] (0xc000b7e2c0) Go away received\nI0218 22:21:03.602579    2137 log.go:172] (0xc000b7e2c0) (0xc0007d8000) Stream removed, broadcasting: 1\nI0218 22:21:03.602620    2137 log.go:172] (0xc000b7e2c0) (0xc0007d80a0) Stream removed, broadcasting: 3\nI0218 22:21:03.602681    2137 log.go:172] (0xc000b7e2c0) (0xc0007d8140) Stream removed, broadcasting: 5\n"
Feb 18 22:21:03.615: INFO: stdout: ""
Feb 18 22:21:03.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1756 execpoddqjmt -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30154'
Feb 18 22:21:04.007: INFO: stderr: "I0218 22:21:03.772745    2157 log.go:172] (0xc000920000) (0xc000a6e000) Create stream\nI0218 22:21:03.773206    2157 log.go:172] (0xc000920000) (0xc000a6e000) Stream added, broadcasting: 1\nI0218 22:21:03.782295    2157 log.go:172] (0xc000920000) Reply frame received for 1\nI0218 22:21:03.782436    2157 log.go:172] (0xc000920000) (0xc000a6e0a0) Create stream\nI0218 22:21:03.782454    2157 log.go:172] (0xc000920000) (0xc000a6e0a0) Stream added, broadcasting: 3\nI0218 22:21:03.784409    2157 log.go:172] (0xc000920000) Reply frame received for 3\nI0218 22:21:03.784493    2157 log.go:172] (0xc000920000) (0xc000567400) Create stream\nI0218 22:21:03.784531    2157 log.go:172] (0xc000920000) (0xc000567400) Stream added, broadcasting: 5\nI0218 22:21:03.787744    2157 log.go:172] (0xc000920000) Reply frame received for 5\nI0218 22:21:03.892725    2157 log.go:172] (0xc000920000) Data frame received for 5\nI0218 22:21:03.893070    2157 log.go:172] (0xc000567400) (5) Data frame handling\nI0218 22:21:03.893129    2157 log.go:172] (0xc000567400) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30154\nI0218 22:21:03.897538    2157 log.go:172] (0xc000920000) Data frame received for 5\nI0218 22:21:03.897555    2157 log.go:172] (0xc000567400) (5) Data frame handling\nI0218 22:21:03.897569    2157 log.go:172] (0xc000567400) (5) Data frame sent\nConnection to 10.96.1.234 30154 port [tcp/30154] succeeded!\nI0218 22:21:03.994104    2157 log.go:172] (0xc000920000) Data frame received for 1\nI0218 22:21:03.994247    2157 log.go:172] (0xc000920000) (0xc000a6e0a0) Stream removed, broadcasting: 3\nI0218 22:21:03.994322    2157 log.go:172] (0xc000a6e000) (1) Data frame handling\nI0218 22:21:03.994346    2157 log.go:172] (0xc000a6e000) (1) Data frame sent\nI0218 22:21:03.994395    2157 log.go:172] (0xc000920000) (0xc000567400) Stream removed, broadcasting: 5\nI0218 22:21:03.994438    2157 log.go:172] (0xc000920000) (0xc000a6e000) Stream removed, broadcasting: 1\nI0218 22:21:03.994460    2157 log.go:172] (0xc000920000) Go away received\nI0218 22:21:03.996393    2157 log.go:172] (0xc000920000) (0xc000a6e000) Stream removed, broadcasting: 1\nI0218 22:21:03.996472    2157 log.go:172] (0xc000920000) (0xc000a6e0a0) Stream removed, broadcasting: 3\nI0218 22:21:03.996527    2157 log.go:172] (0xc000920000) (0xc000567400) Stream removed, broadcasting: 5\n"
Feb 18 22:21:04.007: INFO: stdout: ""
Feb 18 22:21:04.007: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:21:04.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1756" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:25.338 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":181,"skipped":2985,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:21:04.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 22:21:04.769: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 22:21:06.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:21:08.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:21:10.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:21:13.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:21:15.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:21:16.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661264, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 22:21:19.865: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:21:19.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5671-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:21:21.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8608" for this suite.
STEP: Destroying namespace "webhook-8608-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.320 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":182,"skipped":3003,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:21:21.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb 18 22:21:21.673: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 18 22:21:21.717: INFO: Waiting for terminating namespaces to be deleted...
Feb 18 22:21:21.724: INFO: 
Logging pods the kubelet thinks is on node jerma-node before test
Feb 18 22:21:21.756: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 18 22:21:21.756: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 22:21:21.756: INFO: sample-webhook-deployment-5f65f8c764-tl9hp from webhook-8608 started at 2020-02-18 22:21:04 +0000 UTC (1 container statuses recorded)
Feb 18 22:21:21.756: INFO: 	Container sample-webhook ready: true, restart count 0
Feb 18 22:21:21.756: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 18 22:21:21.756: INFO: 	Container weave ready: true, restart count 1
Feb 18 22:21:21.756: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 22:21:21.756: INFO: 
Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Feb 18 22:21:21.812: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 18 22:21:21.812: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 18 22:21:21.812: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 18 22:21:21.812: INFO: 	Container etcd ready: true, restart count 1
Feb 18 22:21:21.812: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 18 22:21:21.812: INFO: 	Container coredns ready: true, restart count 0
Feb 18 22:21:21.812: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 18 22:21:21.812: INFO: 	Container coredns ready: true, restart count 0
Feb 18 22:21:21.812: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 18 22:21:21.812: INFO: 	Container kube-controller-manager ready: true, restart count 14
Feb 18 22:21:21.812: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 18 22:21:21.812: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 22:21:21.812: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 18 22:21:21.812: INFO: 	Container weave ready: true, restart count 0
Feb 18 22:21:21.812: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 22:21:21.812: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 18 22:21:21.812: INFO: 	Container kube-scheduler ready: true, restart count 18
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-109a1034-7775-47f1-bec8-6a196816de75 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-109a1034-7775-47f1-bec8-6a196816de75 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-109a1034-7775-47f1-bec8-6a196816de75
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:21:42.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6581" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:20.736 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":183,"skipped":3020,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:21:42.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-4179/secret-test-f6e28541-a95b-4cb1-8771-8d9dddeca316
STEP: Creating a pod to test consume secrets
Feb 18 22:21:42.295: INFO: Waiting up to 5m0s for pod "pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433" in namespace "secrets-4179" to be "success or failure"
Feb 18 22:21:42.298: INFO: Pod "pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433": Phase="Pending", Reason="", readiness=false. Elapsed: 3.205219ms
Feb 18 22:21:44.312: INFO: Pod "pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017158018s
Feb 18 22:21:46.375: INFO: Pod "pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079496205s
Feb 18 22:21:48.379: INFO: Pod "pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084132823s
Feb 18 22:21:50.385: INFO: Pod "pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090279817s
Feb 18 22:21:52.397: INFO: Pod "pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433": Phase="Pending", Reason="", readiness=false. Elapsed: 10.102401853s
Feb 18 22:21:54.407: INFO: Pod "pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.112366883s
STEP: Saw pod success
Feb 18 22:21:54.408: INFO: Pod "pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433" satisfied condition "success or failure"
Feb 18 22:21:54.412: INFO: Trying to get logs from node jerma-node pod pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433 container env-test: 
STEP: delete the pod
Feb 18 22:21:54.462: INFO: Waiting for pod pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433 to disappear
Feb 18 22:21:54.474: INFO: Pod pod-configmaps-b64ae672-e270-45be-aa56-2795ff433433 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:21:54.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4179" for this suite.

• [SLOW TEST:12.299 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":3039,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
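For context, the pod this Secrets test creates ("secret-test-…" consumed by container "env-test") can be approximated by a manifest along these lines. Names, keys, and values here are illustrative assumptions; the real fixture lives in test/e2e/common/secrets.go:

```yaml
# Hypothetical sketch of the test fixture: a Secret whose key is
# surfaced to the container as an environment variable.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # assumed name
stringData:
  data-1: value-1              # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env        # assumed name
spec:
  restartPolicy: Never         # pod runs once; test waits for Succeeded
  containers:
  - name: env-test             # container name matches the log above
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA        # assumed variable name
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```

The "success or failure" polling in the log corresponds to waiting for this pod's phase to reach Succeeded, after which the container logs are checked for the expected value.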
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:21:54.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:22:01.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2476" for this suite.

• [SLOW TEST:7.253 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":185,"skipped":3045,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
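The ResourceQuota steps above (count existing quotas, create one, wait for status) amount to creating an object like the following and polling until the API server fills in `status.hard`/`status.used`. The `hard` limits shown are illustrative assumptions, not values from the log:

```yaml
# Hypothetical quota resembling what the test creates; the test only
# checks that status is calculated promptly, not any particular limit.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota             # assumed name
spec:
  hard:
    pods: "5"                  # assumed limits
    services: "3"
```

`kubectl get resourcequota test-quota -o yaml` would show the controller-populated `status.used` alongside `status.hard` once calculation completes.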
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:22:01.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-9f108ada-dcb6-4f37-a9e6-6c5a557d5737
STEP: Creating a pod to test consume configMaps
Feb 18 22:22:01.916: INFO: Waiting up to 5m0s for pod "pod-configmaps-00a57074-ffcc-49ee-a6e9-56463686c94a" in namespace "configmap-3896" to be "success or failure"
Feb 18 22:22:01.921: INFO: Pod "pod-configmaps-00a57074-ffcc-49ee-a6e9-56463686c94a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.866342ms
Feb 18 22:22:03.930: INFO: Pod "pod-configmaps-00a57074-ffcc-49ee-a6e9-56463686c94a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014491631s
Feb 18 22:22:05.937: INFO: Pod "pod-configmaps-00a57074-ffcc-49ee-a6e9-56463686c94a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021341815s
Feb 18 22:22:07.949: INFO: Pod "pod-configmaps-00a57074-ffcc-49ee-a6e9-56463686c94a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033787624s
Feb 18 22:22:09.956: INFO: Pod "pod-configmaps-00a57074-ffcc-49ee-a6e9-56463686c94a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040300359s
STEP: Saw pod success
Feb 18 22:22:09.956: INFO: Pod "pod-configmaps-00a57074-ffcc-49ee-a6e9-56463686c94a" satisfied condition "success or failure"
Feb 18 22:22:09.960: INFO: Trying to get logs from node jerma-node pod pod-configmaps-00a57074-ffcc-49ee-a6e9-56463686c94a container configmap-volume-test: 
STEP: delete the pod
Feb 18 22:22:10.156: INFO: Waiting for pod pod-configmaps-00a57074-ffcc-49ee-a6e9-56463686c94a to disappear
Feb 18 22:22:10.168: INFO: Pod pod-configmaps-00a57074-ffcc-49ee-a6e9-56463686c94a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:22:10.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3896" for this suite.

• [SLOW TEST:8.438 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3125,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
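The ConfigMap volume test above (container "configmap-volume-test") exercises `defaultMode` on a ConfigMap volume source. A minimal sketch, with assumed names, keys, and mount path (the actual fixture is in test/e2e/common/configmap_volume.go):

```yaml
# Hypothetical fixture: a ConfigMap mounted as a volume whose files
# get the permission bits set by defaultMode.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume  # assumed name
data:
  data-1: value-1              # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps         # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test   # container name matches the log above
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume   # assumed path
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400        # the [LinuxOnly] part: file mode on the projected files
```

Note that in JSON manifests `defaultMode` must be written in decimal (0400 octal is 256); YAML accepts the octal literal.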
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:22:10.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb 18 22:22:10.433: INFO: PodSpec: initContainers in spec.initContainers
Feb 18 22:23:05.339: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f2fa9d2f-c4f1-41c0-a9c9-fa666ed0cdbf", GenerateName:"", Namespace:"init-container-1089", SelfLink:"/api/v1/namespaces/init-container-1089/pods/pod-init-f2fa9d2f-c4f1-41c0-a9c9-fa666ed0cdbf", UID:"239a0f8e-223b-4df9-b2f5-9863e261eeaf", ResourceVersion:"9278836", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717661330, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"433299802"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bwjkr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006430000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bwjkr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bwjkr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bwjkr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0040f8068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc006072000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0040f8120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0040f8140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0040f8148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0040f814c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661332, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661332, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661332, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717661330, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc002b5e040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00297a0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00297a150)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://3390411b784014a2b64cfe5b9f0d1e687602531aa5950127d08d2ee7a344c519", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b5e080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b5e060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0040f81df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:23:05.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1089" for this suite.

• [SLOW TEST:55.528 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":187,"skipped":3133,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
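The large PodSpec struct dump above condenses to roughly this manifest (reconstructed from the fields the log actually prints: init containers "init1"/"init2", app container "run1", their images, commands, and CPU resources; service-account volume and other defaults omitted):

```yaml
# Reconstruction of the test pod from the logged PodSpec. With
# restartPolicy Always, the failing init1 is retried indefinitely
# (the log shows RestartCount:3), so init2 never runs and run1
# stays Waiting -- which is exactly what the test asserts.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]    # always exits non-zero
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]     # never reached while init1 fails
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
      limits:
        cpu: 100m
```

The pod conditions in the dump (`ContainersNotInitialized`, `ContainersNotReady` for `[run1]`) confirm the app container is held back while initialization keeps failing.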
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:23:05.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6553
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-6553
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6553
Feb 18 22:23:05.909: INFO: Found 0 stateful pods, waiting for 1
Feb 18 22:23:15.923: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Feb 18 22:23:25.916: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 18 22:23:25.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 22:23:26.248: INFO: stderr: "I0218 22:23:26.067988    2176 log.go:172] (0xc00010b6b0) (0xc0009b4820) Create stream\nI0218 22:23:26.068175    2176 log.go:172] (0xc00010b6b0) (0xc0009b4820) Stream added, broadcasting: 1\nI0218 22:23:26.071135    2176 log.go:172] (0xc00010b6b0) Reply frame received for 1\nI0218 22:23:26.071161    2176 log.go:172] (0xc00010b6b0) (0xc0009c2500) Create stream\nI0218 22:23:26.071174    2176 log.go:172] (0xc00010b6b0) (0xc0009c2500) Stream added, broadcasting: 3\nI0218 22:23:26.072251    2176 log.go:172] (0xc00010b6b0) Reply frame received for 3\nI0218 22:23:26.072271    2176 log.go:172] (0xc00010b6b0) (0xc0009c4780) Create stream\nI0218 22:23:26.072276    2176 log.go:172] (0xc00010b6b0) (0xc0009c4780) Stream added, broadcasting: 5\nI0218 22:23:26.073282    2176 log.go:172] (0xc00010b6b0) Reply frame received for 5\nI0218 22:23:26.130086    2176 log.go:172] (0xc00010b6b0) Data frame received for 5\nI0218 22:23:26.130842    2176 log.go:172] (0xc0009c4780) (5) Data frame handling\nI0218 22:23:26.130937    2176 log.go:172] (0xc0009c4780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 22:23:26.157915    2176 log.go:172] (0xc00010b6b0) Data frame received for 3\nI0218 22:23:26.157963    2176 log.go:172] (0xc0009c2500) (3) Data frame handling\nI0218 22:23:26.157991    2176 log.go:172] (0xc0009c2500) (3) Data frame sent\nI0218 22:23:26.235852    2176 log.go:172] (0xc00010b6b0) Data frame received for 1\nI0218 22:23:26.236291    2176 log.go:172] (0xc00010b6b0) (0xc0009c2500) Stream removed, broadcasting: 3\nI0218 22:23:26.236385    2176 log.go:172] (0xc0009b4820) (1) Data frame handling\nI0218 22:23:26.236442    2176 log.go:172] (0xc0009b4820) (1) Data frame sent\nI0218 22:23:26.236512    2176 log.go:172] (0xc00010b6b0) (0xc0009c4780) Stream removed, broadcasting: 5\nI0218 22:23:26.236579    2176 log.go:172] (0xc00010b6b0) (0xc0009b4820) Stream removed, broadcasting: 1\nI0218 22:23:26.236627    2176 
log.go:172] (0xc00010b6b0) Go away received\nI0218 22:23:26.239171    2176 log.go:172] (0xc00010b6b0) (0xc0009b4820) Stream removed, broadcasting: 1\nI0218 22:23:26.239192    2176 log.go:172] (0xc00010b6b0) (0xc0009c2500) Stream removed, broadcasting: 3\nI0218 22:23:26.239207    2176 log.go:172] (0xc00010b6b0) (0xc0009c4780) Stream removed, broadcasting: 5\n"
Feb 18 22:23:26.248: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 22:23:26.248: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 18 22:23:26.252: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 18 22:23:36.259: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 18 22:23:36.259: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 22:23:36.288: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 18 22:23:36.288: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  }]
Feb 18 22:23:36.288: INFO: 
Feb 18 22:23:36.288: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 18 22:23:38.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991683308s
Feb 18 22:23:39.343: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.036019947s
Feb 18 22:23:40.359: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.937103086s
Feb 18 22:23:41.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.921069825s
Feb 18 22:23:42.678: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.914240178s
Feb 18 22:23:44.351: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.602214941s
Feb 18 22:23:45.357: INFO: Verifying statefulset ss doesn't scale past 3 for another 929.128161ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6553
Feb 18 22:23:46.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 22:23:46.813: INFO: stderr: "I0218 22:23:46.643511    2195 log.go:172] (0xc0009e4c60) (0xc0009b2320) Create stream\nI0218 22:23:46.643819    2195 log.go:172] (0xc0009e4c60) (0xc0009b2320) Stream added, broadcasting: 1\nI0218 22:23:46.648697    2195 log.go:172] (0xc0009e4c60) Reply frame received for 1\nI0218 22:23:46.648769    2195 log.go:172] (0xc0009e4c60) (0xc00096e320) Create stream\nI0218 22:23:46.648778    2195 log.go:172] (0xc0009e4c60) (0xc00096e320) Stream added, broadcasting: 3\nI0218 22:23:46.650130    2195 log.go:172] (0xc0009e4c60) Reply frame received for 3\nI0218 22:23:46.650170    2195 log.go:172] (0xc0009e4c60) (0xc00095a000) Create stream\nI0218 22:23:46.650223    2195 log.go:172] (0xc0009e4c60) (0xc00095a000) Stream added, broadcasting: 5\nI0218 22:23:46.651719    2195 log.go:172] (0xc0009e4c60) Reply frame received for 5\nI0218 22:23:46.719846    2195 log.go:172] (0xc0009e4c60) Data frame received for 5\nI0218 22:23:46.719936    2195 log.go:172] (0xc00095a000) (5) Data frame handling\nI0218 22:23:46.719959    2195 log.go:172] (0xc00095a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 22:23:46.719984    2195 log.go:172] (0xc0009e4c60) Data frame received for 3\nI0218 22:23:46.719993    2195 log.go:172] (0xc00096e320) (3) Data frame handling\nI0218 22:23:46.720012    2195 log.go:172] (0xc00096e320) (3) Data frame sent\nI0218 22:23:46.800921    2195 log.go:172] (0xc0009e4c60) (0xc00096e320) Stream removed, broadcasting: 3\nI0218 22:23:46.801007    2195 log.go:172] (0xc0009e4c60) Data frame received for 1\nI0218 22:23:46.801027    2195 log.go:172] (0xc0009e4c60) (0xc00095a000) Stream removed, broadcasting: 5\nI0218 22:23:46.801073    2195 log.go:172] (0xc0009b2320) (1) Data frame handling\nI0218 22:23:46.801094    2195 log.go:172] (0xc0009b2320) (1) Data frame sent\nI0218 22:23:46.801106    2195 log.go:172] (0xc0009e4c60) (0xc0009b2320) Stream removed, broadcasting: 1\nI0218 22:23:46.801145    2195 
log.go:172] (0xc0009e4c60) Go away received\nI0218 22:23:46.801980    2195 log.go:172] (0xc0009e4c60) (0xc0009b2320) Stream removed, broadcasting: 1\nI0218 22:23:46.801996    2195 log.go:172] (0xc0009e4c60) (0xc00096e320) Stream removed, broadcasting: 3\nI0218 22:23:46.802002    2195 log.go:172] (0xc0009e4c60) (0xc00095a000) Stream removed, broadcasting: 5\n"
Feb 18 22:23:46.813: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 18 22:23:46.813: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 18 22:23:46.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 22:23:47.252: INFO: stderr: "I0218 22:23:47.082520    2217 log.go:172] (0xc00056b130) (0xc0006cbea0) Create stream\nI0218 22:23:47.082761    2217 log.go:172] (0xc00056b130) (0xc0006cbea0) Stream added, broadcasting: 1\nI0218 22:23:47.088450    2217 log.go:172] (0xc00056b130) Reply frame received for 1\nI0218 22:23:47.088587    2217 log.go:172] (0xc00056b130) (0xc000632780) Create stream\nI0218 22:23:47.088601    2217 log.go:172] (0xc00056b130) (0xc000632780) Stream added, broadcasting: 3\nI0218 22:23:47.089696    2217 log.go:172] (0xc00056b130) Reply frame received for 3\nI0218 22:23:47.089726    2217 log.go:172] (0xc00056b130) (0xc0006cbf40) Create stream\nI0218 22:23:47.089744    2217 log.go:172] (0xc00056b130) (0xc0006cbf40) Stream added, broadcasting: 5\nI0218 22:23:47.093433    2217 log.go:172] (0xc00056b130) Reply frame received for 5\nI0218 22:23:47.157735    2217 log.go:172] (0xc00056b130) Data frame received for 5\nI0218 22:23:47.157803    2217 log.go:172] (0xc0006cbf40) (5) Data frame handling\nI0218 22:23:47.157827    2217 log.go:172] (0xc0006cbf40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0218 22:23:47.157857    2217 log.go:172] (0xc00056b130) Data frame received for 3\nI0218 22:23:47.157873    2217 log.go:172] (0xc000632780) (3) Data frame handling\nI0218 22:23:47.157892    2217 log.go:172] (0xc000632780) (3) Data frame sent\nI0218 22:23:47.243690    2217 log.go:172] (0xc00056b130) Data frame received for 1\nI0218 22:23:47.243789    2217 log.go:172] (0xc00056b130) (0xc0006cbf40) Stream removed, broadcasting: 5\nI0218 22:23:47.243869    2217 log.go:172] (0xc0006cbea0) (1) Data frame handling\nI0218 22:23:47.243908    2217 log.go:172] (0xc0006cbea0) (1) Data frame sent\nI0218 22:23:47.243994    2217 log.go:172] (0xc00056b130) (0xc000632780) Stream removed, broadcasting: 3\nI0218 22:23:47.244032    2217 log.go:172] (0xc00056b130) (0xc0006cbea0) 
Stream removed, broadcasting: 1\nI0218 22:23:47.244051    2217 log.go:172] (0xc00056b130) Go away received\nI0218 22:23:47.245042    2217 log.go:172] (0xc00056b130) (0xc0006cbea0) Stream removed, broadcasting: 1\nI0218 22:23:47.245064    2217 log.go:172] (0xc00056b130) (0xc000632780) Stream removed, broadcasting: 3\nI0218 22:23:47.245076    2217 log.go:172] (0xc00056b130) (0xc0006cbf40) Stream removed, broadcasting: 5\n"
Feb 18 22:23:47.252: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 18 22:23:47.252: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 18 22:23:47.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 22:23:47.582: INFO: stderr: "I0218 22:23:47.409698    2238 log.go:172] (0xc00044a000) (0xc00070d5e0) Create stream\nI0218 22:23:47.409848    2238 log.go:172] (0xc00044a000) (0xc00070d5e0) Stream added, broadcasting: 1\nI0218 22:23:47.419620    2238 log.go:172] (0xc00044a000) Reply frame received for 1\nI0218 22:23:47.419718    2238 log.go:172] (0xc00044a000) (0xc00062fb80) Create stream\nI0218 22:23:47.419731    2238 log.go:172] (0xc00044a000) (0xc00062fb80) Stream added, broadcasting: 3\nI0218 22:23:47.421426    2238 log.go:172] (0xc00044a000) Reply frame received for 3\nI0218 22:23:47.421451    2238 log.go:172] (0xc00044a000) (0xc000926000) Create stream\nI0218 22:23:47.421458    2238 log.go:172] (0xc00044a000) (0xc000926000) Stream added, broadcasting: 5\nI0218 22:23:47.422880    2238 log.go:172] (0xc00044a000) Reply frame received for 5\nI0218 22:23:47.493739    2238 log.go:172] (0xc00044a000) Data frame received for 5\nI0218 22:23:47.493942    2238 log.go:172] (0xc000926000) (5) Data frame handling\nI0218 22:23:47.493994    2238 log.go:172] (0xc000926000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 22:23:47.494655    2238 log.go:172] (0xc00044a000) Data frame received for 3\nI0218 22:23:47.494672    2238 log.go:172] (0xc00062fb80) (3) Data frame handling\nI0218 22:23:47.494691    2238 log.go:172] (0xc00062fb80) (3) Data frame sent\nI0218 22:23:47.503566    2238 log.go:172] (0xc00044a000) Data frame received for 5\nI0218 22:23:47.503597    2238 log.go:172] (0xc000926000) (5) Data frame handling\nI0218 22:23:47.503624    2238 log.go:172] (0xc000926000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0218 22:23:47.571842    2238 log.go:172] (0xc00044a000) (0xc00062fb80) Stream removed, broadcasting: 3\nI0218 22:23:47.572279    2238 log.go:172] (0xc00044a000) Data frame received for 1\nI0218 22:23:47.572346    2238 log.go:172] (0xc00044a000) (0xc000926000) Stream removed, 
broadcasting: 5\nI0218 22:23:47.572510    2238 log.go:172] (0xc00070d5e0) (1) Data frame handling\nI0218 22:23:47.572552    2238 log.go:172] (0xc00070d5e0) (1) Data frame sent\nI0218 22:23:47.572592    2238 log.go:172] (0xc00044a000) (0xc00070d5e0) Stream removed, broadcasting: 1\nI0218 22:23:47.572739    2238 log.go:172] (0xc00044a000) Go away received\nI0218 22:23:47.574014    2238 log.go:172] (0xc00044a000) (0xc00070d5e0) Stream removed, broadcasting: 1\nI0218 22:23:47.574051    2238 log.go:172] (0xc00044a000) (0xc00062fb80) Stream removed, broadcasting: 3\nI0218 22:23:47.574068    2238 log.go:172] (0xc00044a000) (0xc000926000) Stream removed, broadcasting: 5\n"
Feb 18 22:23:47.583: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 18 22:23:47.583: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 18 22:23:47.591: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 22:23:47.592: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 22:23:47.592: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 18 22:23:47.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 22:23:47.957: INFO: stderr: "I0218 22:23:47.756452    2260 log.go:172] (0xc0000f42c0) (0xc00068e640) Create stream\nI0218 22:23:47.756675    2260 log.go:172] (0xc0000f42c0) (0xc00068e640) Stream added, broadcasting: 1\nI0218 22:23:47.759749    2260 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0218 22:23:47.759784    2260 log.go:172] (0xc0000f42c0) (0xc0005a9400) Create stream\nI0218 22:23:47.759796    2260 log.go:172] (0xc0000f42c0) (0xc0005a9400) Stream added, broadcasting: 3\nI0218 22:23:47.760645    2260 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0218 22:23:47.760661    2260 log.go:172] (0xc0000f42c0) (0xc00080fd60) Create stream\nI0218 22:23:47.760666    2260 log.go:172] (0xc0000f42c0) (0xc00080fd60) Stream added, broadcasting: 5\nI0218 22:23:47.761550    2260 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0218 22:23:47.829905    2260 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0218 22:23:47.829961    2260 log.go:172] (0xc00080fd60) (5) Data frame handling\nI0218 22:23:47.829973    2260 log.go:172] (0xc00080fd60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 22:23:47.829995    2260 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0218 22:23:47.830000    2260 log.go:172] (0xc0005a9400) (3) Data frame handling\nI0218 22:23:47.830008    2260 log.go:172] (0xc0005a9400) (3) Data frame sent\nI0218 22:23:47.931400    2260 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0218 22:23:47.931571    2260 log.go:172] (0xc0000f42c0) (0xc00080fd60) Stream removed, broadcasting: 5\nI0218 22:23:47.931701    2260 log.go:172] (0xc00068e640) (1) Data frame handling\nI0218 22:23:47.931742    2260 log.go:172] (0xc00068e640) (1) Data frame sent\nI0218 22:23:47.931831    2260 log.go:172] (0xc0000f42c0) (0xc0005a9400) Stream removed, broadcasting: 3\nI0218 22:23:47.931897    2260 log.go:172] (0xc0000f42c0) (0xc00068e640) Stream removed, broadcasting: 1\nI0218 22:23:47.931912    2260 
log.go:172] (0xc0000f42c0) Go away received\nI0218 22:23:47.933342    2260 log.go:172] (0xc0000f42c0) (0xc00068e640) Stream removed, broadcasting: 1\nI0218 22:23:47.933356    2260 log.go:172] (0xc0000f42c0) (0xc0005a9400) Stream removed, broadcasting: 3\nI0218 22:23:47.933362    2260 log.go:172] (0xc0000f42c0) (0xc00080fd60) Stream removed, broadcasting: 5\n"
Feb 18 22:23:47.957: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 22:23:47.957: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 18 22:23:47.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 22:23:48.592: INFO: stderr: "I0218 22:23:48.345269    2281 log.go:172] (0xc000a89760) (0xc000958a00) Create stream\nI0218 22:23:48.345446    2281 log.go:172] (0xc000a89760) (0xc000958a00) Stream added, broadcasting: 1\nI0218 22:23:48.351741    2281 log.go:172] (0xc000a89760) Reply frame received for 1\nI0218 22:23:48.351787    2281 log.go:172] (0xc000a89760) (0xc0005b05a0) Create stream\nI0218 22:23:48.351801    2281 log.go:172] (0xc000a89760) (0xc0005b05a0) Stream added, broadcasting: 3\nI0218 22:23:48.353210    2281 log.go:172] (0xc000a89760) Reply frame received for 3\nI0218 22:23:48.353233    2281 log.go:172] (0xc000a89760) (0xc00047d360) Create stream\nI0218 22:23:48.353244    2281 log.go:172] (0xc000a89760) (0xc00047d360) Stream added, broadcasting: 5\nI0218 22:23:48.354995    2281 log.go:172] (0xc000a89760) Reply frame received for 5\nI0218 22:23:48.436267    2281 log.go:172] (0xc000a89760) Data frame received for 5\nI0218 22:23:48.436437    2281 log.go:172] (0xc00047d360) (5) Data frame handling\nI0218 22:23:48.436494    2281 log.go:172] (0xc00047d360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 22:23:48.462187    2281 log.go:172] (0xc000a89760) Data frame received for 3\nI0218 22:23:48.462264    2281 log.go:172] (0xc0005b05a0) (3) Data frame handling\nI0218 22:23:48.462309    2281 log.go:172] (0xc0005b05a0) (3) Data frame sent\nI0218 22:23:48.580521    2281 log.go:172] (0xc000a89760) Data frame received for 1\nI0218 22:23:48.580750    2281 log.go:172] (0xc000a89760) (0xc0005b05a0) Stream removed, broadcasting: 3\nI0218 22:23:48.580857    2281 log.go:172] (0xc000958a00) (1) Data frame handling\nI0218 22:23:48.580908    2281 log.go:172] (0xc000958a00) (1) Data frame sent\nI0218 22:23:48.580952    2281 log.go:172] (0xc000a89760) (0xc00047d360) Stream removed, broadcasting: 5\nI0218 22:23:48.581022    2281 log.go:172] (0xc000a89760) (0xc000958a00) Stream removed, broadcasting: 1\nI0218 22:23:48.581046    2281 
log.go:172] (0xc000a89760) Go away received\nI0218 22:23:48.582727    2281 log.go:172] (0xc000a89760) (0xc000958a00) Stream removed, broadcasting: 1\nI0218 22:23:48.582746    2281 log.go:172] (0xc000a89760) (0xc0005b05a0) Stream removed, broadcasting: 3\nI0218 22:23:48.582750    2281 log.go:172] (0xc000a89760) (0xc00047d360) Stream removed, broadcasting: 5\n"
Feb 18 22:23:48.593: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 22:23:48.593: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 18 22:23:48.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 22:23:49.080: INFO: stderr: "I0218 22:23:48.843070    2301 log.go:172] (0xc0003f3290) (0xc0006bbae0) Create stream\nI0218 22:23:48.843325    2301 log.go:172] (0xc0003f3290) (0xc0006bbae0) Stream added, broadcasting: 1\nI0218 22:23:48.848029    2301 log.go:172] (0xc0003f3290) Reply frame received for 1\nI0218 22:23:48.848128    2301 log.go:172] (0xc0003f3290) (0xc000966000) Create stream\nI0218 22:23:48.848150    2301 log.go:172] (0xc0003f3290) (0xc000966000) Stream added, broadcasting: 3\nI0218 22:23:48.852866    2301 log.go:172] (0xc0003f3290) Reply frame received for 3\nI0218 22:23:48.852973    2301 log.go:172] (0xc0003f3290) (0xc00030a000) Create stream\nI0218 22:23:48.852995    2301 log.go:172] (0xc0003f3290) (0xc00030a000) Stream added, broadcasting: 5\nI0218 22:23:48.854562    2301 log.go:172] (0xc0003f3290) Reply frame received for 5\nI0218 22:23:48.937015    2301 log.go:172] (0xc0003f3290) Data frame received for 5\nI0218 22:23:48.937074    2301 log.go:172] (0xc00030a000) (5) Data frame handling\nI0218 22:23:48.937095    2301 log.go:172] (0xc00030a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 22:23:48.989946    2301 log.go:172] (0xc0003f3290) Data frame received for 3\nI0218 22:23:48.990101    2301 log.go:172] (0xc000966000) (3) Data frame handling\nI0218 22:23:48.990167    2301 log.go:172] (0xc000966000) (3) Data frame sent\nI0218 22:23:49.068141    2301 log.go:172] (0xc0003f3290) (0xc000966000) Stream removed, broadcasting: 3\nI0218 22:23:49.068489    2301 log.go:172] (0xc0003f3290) Data frame received for 1\nI0218 22:23:49.068560    2301 log.go:172] (0xc0003f3290) (0xc00030a000) Stream removed, broadcasting: 5\nI0218 22:23:49.068671    2301 log.go:172] (0xc0006bbae0) (1) Data frame handling\nI0218 22:23:49.068701    2301 log.go:172] (0xc0006bbae0) (1) Data frame sent\nI0218 22:23:49.068714    2301 log.go:172] (0xc0003f3290) (0xc0006bbae0) Stream removed, broadcasting: 1\nI0218 22:23:49.068732    2301 
log.go:172] (0xc0003f3290) Go away received\nI0218 22:23:49.070432    2301 log.go:172] (0xc0003f3290) (0xc0006bbae0) Stream removed, broadcasting: 1\nI0218 22:23:49.070451    2301 log.go:172] (0xc0003f3290) (0xc000966000) Stream removed, broadcasting: 3\nI0218 22:23:49.070457    2301 log.go:172] (0xc0003f3290) (0xc00030a000) Stream removed, broadcasting: 5\n"
Feb 18 22:23:49.080: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 22:23:49.080: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 18 22:23:49.080: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 22:23:49.085: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 18 22:23:59.099: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 18 22:23:59.099: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 18 22:23:59.099: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 18 22:23:59.114: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 18 22:23:59.114: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  }]
Feb 18 22:23:59.114: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:23:59.114: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:23:59.114: INFO: 
Feb 18 22:23:59.114: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 18 22:24:00.747: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 18 22:24:00.747: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  }]
Feb 18 22:24:00.747: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:00.747: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:00.747: INFO: 
Feb 18 22:24:00.747: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 18 22:24:01.754: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 18 22:24:01.754: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  }]
Feb 18 22:24:01.755: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:01.755: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:01.755: INFO: 
Feb 18 22:24:01.755: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 18 22:24:02.762: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 18 22:24:02.762: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  }]
Feb 18 22:24:02.762: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:02.762: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:02.762: INFO: 
Feb 18 22:24:02.762: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 18 22:24:04.201: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 18 22:24:04.201: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  }]
Feb 18 22:24:04.201: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:04.201: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:04.201: INFO: 
Feb 18 22:24:04.201: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 18 22:24:05.208: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 18 22:24:05.208: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  }]
Feb 18 22:24:05.208: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:05.208: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:05.208: INFO: 
Feb 18 22:24:05.208: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 18 22:24:06.232: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 18 22:24:06.232: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  }]
Feb 18 22:24:06.232: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:06.232: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:06.232: INFO: 
Feb 18 22:24:06.232: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 18 22:24:07.244: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 18 22:24:07.244: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  }]
Feb 18 22:24:07.244: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:07.244: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:07.244: INFO: 
Feb 18 22:24:07.244: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 18 22:24:08.255: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 18 22:24:08.255: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:06 +0000 UTC  }]
Feb 18 22:24:08.256: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:08.256: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 22:23:36 +0000 UTC  }]
Feb 18 22:24:08.256: INFO: 
Feb 18 22:24:08.256: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6553
Feb 18 22:24:09.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 22:24:09.519: INFO: rc: 1
Feb 18 22:24:09.519: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Feb 18 22:24:19.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 22:24:19.654: INFO: rc: 1
Feb 18 22:24:19.655: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 18 22:24:29.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 22:24:29.872: INFO: rc: 1
Feb 18 22:24:29.872: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 18 22:24:39.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 22:24:40.081: INFO: rc: 1
Feb 18 22:24:40.082: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 18 22:29:07.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 22:29:07.485: INFO: rc: 1
Feb 18 22:29:07.486: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 18 22:29:17.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 22:29:17.659: INFO: rc: 1
Feb 18 22:29:17.659: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Feb 18 22:29:17.659: INFO: Scaling statefulset ss to 0
Feb 18 22:29:17.675: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb 18 22:29:17.678: INFO: Deleting all statefulset in ns statefulset-6553
Feb 18 22:29:17.683: INFO: Scaling statefulset ss to 0
Feb 18 22:29:17.694: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 22:29:17.699: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:29:17.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6553" for this suite.

• [SLOW TEST:372.069 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":188,"skipped":3140,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
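The failed-exec loop above (`Waiting 10s to retry failed RunHostCmd`) is the framework retrying a host command at a fixed interval until it succeeds or the attempts run out. A minimal POSIX-sh sketch of that pattern, with a hypothetical `flaky` stub standing in for the `kubectl exec … mv -v` call and the 10s interval shortened to 0 for the demo:

```shell
#!/bin/sh
# retry MAX INTERVAL CMD...: run CMD until it succeeds, up to MAX
# attempts, sleeping INTERVAL seconds between failures -- mirroring
# the e2e framework's RunHostCmd retry loop.
retry() {
  max=$1; interval=$2; shift 2
  i=0
  while [ "$i" -lt "$max" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    echo "rc: 1 -- waiting ${interval}s to retry ($i/$max)"
    sleep "$interval"
  done
  return 1
}

# Demo stub (hypothetical): fails twice, then succeeds -- standing in
# for 'kubectl exec ... -- mv -v /tmp/index.html /usr/local/apache2/htdocs/'.
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
flaky() {
  n=$(cat "$attempts_file")
  echo $((n + 1)) > "$attempts_file"
  [ "$n" -ge 2 ]
}

# flaky fails twice, so retry succeeds on the third attempt.
retry 5 0 flaky && echo "command succeeded"
```

In the log above the retries never succeed because pod `ss-0` has already been deleted by the burst-scaling test; the loop simply exhausts its budget and the test moves on to teardown.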
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:29:17.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2080
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Feb 18 22:29:17.975: INFO: Found 0 stateful pods, waiting for 3
Feb 18 22:29:27.986: INFO: Found 2 stateful pods, waiting for 3
Feb 18 22:29:37.988: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 22:29:37.988: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 22:29:37.988: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 18 22:29:47.983: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 22:29:47.983: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 22:29:47.983: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 22:29:47.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2080 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 22:29:48.497: INFO: stderr: "I0218 22:29:48.232581    2942 log.go:172] (0xc000aa0000) (0xc000a26000) Create stream\nI0218 22:29:48.232790    2942 log.go:172] (0xc000aa0000) (0xc000a26000) Stream added, broadcasting: 1\nI0218 22:29:48.238632    2942 log.go:172] (0xc000aa0000) Reply frame received for 1\nI0218 22:29:48.238923    2942 log.go:172] (0xc000aa0000) (0xc000922000) Create stream\nI0218 22:29:48.238968    2942 log.go:172] (0xc000aa0000) (0xc000922000) Stream added, broadcasting: 3\nI0218 22:29:48.240682    2942 log.go:172] (0xc000aa0000) Reply frame received for 3\nI0218 22:29:48.240727    2942 log.go:172] (0xc000aa0000) (0xc000a26140) Create stream\nI0218 22:29:48.240734    2942 log.go:172] (0xc000aa0000) (0xc000a26140) Stream added, broadcasting: 5\nI0218 22:29:48.242263    2942 log.go:172] (0xc000aa0000) Reply frame received for 5\nI0218 22:29:48.335307    2942 log.go:172] (0xc000aa0000) Data frame received for 5\nI0218 22:29:48.335421    2942 log.go:172] (0xc000a26140) (5) Data frame handling\nI0218 22:29:48.335489    2942 log.go:172] (0xc000a26140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 22:29:48.355931    2942 log.go:172] (0xc000aa0000) Data frame received for 3\nI0218 22:29:48.356008    2942 log.go:172] (0xc000922000) (3) Data frame handling\nI0218 22:29:48.356034    2942 log.go:172] (0xc000922000) (3) Data frame sent\nI0218 22:29:48.474037    2942 log.go:172] (0xc000aa0000) (0xc000922000) Stream removed, broadcasting: 3\nI0218 22:29:48.474791    2942 log.go:172] (0xc000aa0000) Data frame received for 1\nI0218 22:29:48.475061    2942 log.go:172] (0xc000aa0000) (0xc000a26140) Stream removed, broadcasting: 5\nI0218 22:29:48.475259    2942 log.go:172] (0xc000a26000) (1) Data frame handling\nI0218 22:29:48.475354    2942 log.go:172] (0xc000a26000) (1) Data frame sent\nI0218 22:29:48.475371    2942 log.go:172] (0xc000aa0000) (0xc000a26000) Stream removed, broadcasting: 1\nI0218 22:29:48.475402    2942 
log.go:172] (0xc000aa0000) Go away received\nI0218 22:29:48.477504    2942 log.go:172] (0xc000aa0000) (0xc000a26000) Stream removed, broadcasting: 1\nI0218 22:29:48.477540    2942 log.go:172] (0xc000aa0000) (0xc000922000) Stream removed, broadcasting: 3\nI0218 22:29:48.477570    2942 log.go:172] (0xc000aa0000) (0xc000a26140) Stream removed, broadcasting: 5\n"
Feb 18 22:29:48.498: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 22:29:48.498: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 18 22:29:58.557: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 18 22:30:08.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2080 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 22:30:09.025: INFO: stderr: "I0218 22:30:08.794629    2964 log.go:172] (0xc000ad66e0) (0xc000ab4280) Create stream\nI0218 22:30:08.794796    2964 log.go:172] (0xc000ad66e0) (0xc000ab4280) Stream added, broadcasting: 1\nI0218 22:30:08.797125    2964 log.go:172] (0xc000ad66e0) Reply frame received for 1\nI0218 22:30:08.797161    2964 log.go:172] (0xc000ad66e0) (0xc000ab4320) Create stream\nI0218 22:30:08.797176    2964 log.go:172] (0xc000ad66e0) (0xc000ab4320) Stream added, broadcasting: 3\nI0218 22:30:08.798252    2964 log.go:172] (0xc000ad66e0) Reply frame received for 3\nI0218 22:30:08.798274    2964 log.go:172] (0xc000ad66e0) (0xc000ab43c0) Create stream\nI0218 22:30:08.798280    2964 log.go:172] (0xc000ad66e0) (0xc000ab43c0) Stream added, broadcasting: 5\nI0218 22:30:08.800582    2964 log.go:172] (0xc000ad66e0) Reply frame received for 5\nI0218 22:30:08.884387    2964 log.go:172] (0xc000ad66e0) Data frame received for 5\nI0218 22:30:08.884450    2964 log.go:172] (0xc000ab43c0) (5) Data frame handling\nI0218 22:30:08.884513    2964 log.go:172] (0xc000ab43c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 22:30:08.885164    2964 log.go:172] (0xc000ad66e0) Data frame received for 3\nI0218 22:30:08.885179    2964 log.go:172] (0xc000ab4320) (3) Data frame handling\nI0218 22:30:08.885208    2964 log.go:172] (0xc000ab4320) (3) Data frame sent\nI0218 22:30:09.016496    2964 log.go:172] (0xc000ad66e0) (0xc000ab4320) Stream removed, broadcasting: 3\nI0218 22:30:09.017220    2964 log.go:172] (0xc000ad66e0) Data frame received for 1\nI0218 22:30:09.017250    2964 log.go:172] (0xc000ad66e0) (0xc000ab43c0) Stream removed, broadcasting: 5\nI0218 22:30:09.017292    2964 log.go:172] (0xc000ab4280) (1) Data frame handling\nI0218 22:30:09.017313    2964 log.go:172] (0xc000ab4280) (1) Data frame sent\nI0218 22:30:09.017326    2964 log.go:172] (0xc000ad66e0) (0xc000ab4280) Stream removed, broadcasting: 1\nI0218 22:30:09.017340    2964 
log.go:172] (0xc000ad66e0) Go away received\nI0218 22:30:09.018419    2964 log.go:172] (0xc000ad66e0) (0xc000ab4280) Stream removed, broadcasting: 1\nI0218 22:30:09.018470    2964 log.go:172] (0xc000ad66e0) (0xc000ab4320) Stream removed, broadcasting: 3\nI0218 22:30:09.018509    2964 log.go:172] (0xc000ad66e0) (0xc000ab43c0) Stream removed, broadcasting: 5\n"
Feb 18 22:30:09.025: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 18 22:30:09.025: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 18 22:30:19.910: INFO: Waiting for StatefulSet statefulset-2080/ss2 to complete update
Feb 18 22:30:19.910: INFO: Waiting for Pod statefulset-2080/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 22:30:19.910: INFO: Waiting for Pod statefulset-2080/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 22:30:29.925: INFO: Waiting for StatefulSet statefulset-2080/ss2 to complete update
Feb 18 22:30:29.925: INFO: Waiting for Pod statefulset-2080/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 22:30:29.925: INFO: Waiting for Pod statefulset-2080/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 22:30:39.922: INFO: Waiting for StatefulSet statefulset-2080/ss2 to complete update
Feb 18 22:30:39.922: INFO: Waiting for Pod statefulset-2080/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 22:30:49.920: INFO: Waiting for StatefulSet statefulset-2080/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 18 22:30:59.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2080 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 22:31:00.399: INFO: stderr: "I0218 22:31:00.165797    2984 log.go:172] (0xc00072a6e0) (0xc000726000) Create stream\nI0218 22:31:00.165950    2984 log.go:172] (0xc00072a6e0) (0xc000726000) Stream added, broadcasting: 1\nI0218 22:31:00.169330    2984 log.go:172] (0xc00072a6e0) Reply frame received for 1\nI0218 22:31:00.169358    2984 log.go:172] (0xc00072a6e0) (0xc0008100a0) Create stream\nI0218 22:31:00.169367    2984 log.go:172] (0xc00072a6e0) (0xc0008100a0) Stream added, broadcasting: 3\nI0218 22:31:00.170366    2984 log.go:172] (0xc00072a6e0) Reply frame received for 3\nI0218 22:31:00.170390    2984 log.go:172] (0xc00072a6e0) (0xc000651c20) Create stream\nI0218 22:31:00.170406    2984 log.go:172] (0xc00072a6e0) (0xc000651c20) Stream added, broadcasting: 5\nI0218 22:31:00.172291    2984 log.go:172] (0xc00072a6e0) Reply frame received for 5\nI0218 22:31:00.238159    2984 log.go:172] (0xc00072a6e0) Data frame received for 5\nI0218 22:31:00.238217    2984 log.go:172] (0xc000651c20) (5) Data frame handling\nI0218 22:31:00.238235    2984 log.go:172] (0xc000651c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 22:31:00.276101    2984 log.go:172] (0xc00072a6e0) Data frame received for 3\nI0218 22:31:00.276129    2984 log.go:172] (0xc0008100a0) (3) Data frame handling\nI0218 22:31:00.276155    2984 log.go:172] (0xc0008100a0) (3) Data frame sent\nI0218 22:31:00.389034    2984 log.go:172] (0xc00072a6e0) (0xc0008100a0) Stream removed, broadcasting: 3\nI0218 22:31:00.389181    2984 log.go:172] (0xc00072a6e0) Data frame received for 1\nI0218 22:31:00.389193    2984 log.go:172] (0xc000726000) (1) Data frame handling\nI0218 22:31:00.389209    2984 log.go:172] (0xc000726000) (1) Data frame sent\nI0218 22:31:00.389278    2984 log.go:172] (0xc00072a6e0) (0xc000726000) Stream removed, broadcasting: 1\nI0218 22:31:00.390232    2984 log.go:172] (0xc00072a6e0) (0xc000651c20) Stream removed, broadcasting: 5\nI0218 22:31:00.390279    2984 
log.go:172] (0xc00072a6e0) (0xc000726000) Stream removed, broadcasting: 1\nI0218 22:31:00.390290    2984 log.go:172] (0xc00072a6e0) (0xc0008100a0) Stream removed, broadcasting: 3\nI0218 22:31:00.390298    2984 log.go:172] (0xc00072a6e0) (0xc000651c20) Stream removed, broadcasting: 5\n"
Feb 18 22:31:00.400: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 22:31:00.400: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 18 22:31:00.476: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 18 22:31:10.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2080 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 22:31:10.931: INFO: stderr: "I0218 22:31:10.736318    3007 log.go:172] (0xc000a3ef20) (0xc000976a00) Create stream\nI0218 22:31:10.736459    3007 log.go:172] (0xc000a3ef20) (0xc000976a00) Stream added, broadcasting: 1\nI0218 22:31:10.741250    3007 log.go:172] (0xc000a3ef20) Reply frame received for 1\nI0218 22:31:10.741288    3007 log.go:172] (0xc000a3ef20) (0xc0005286e0) Create stream\nI0218 22:31:10.741299    3007 log.go:172] (0xc000a3ef20) (0xc0005286e0) Stream added, broadcasting: 3\nI0218 22:31:10.742340    3007 log.go:172] (0xc000a3ef20) Reply frame received for 3\nI0218 22:31:10.742363    3007 log.go:172] (0xc000a3ef20) (0xc0001fb4a0) Create stream\nI0218 22:31:10.742372    3007 log.go:172] (0xc000a3ef20) (0xc0001fb4a0) Stream added, broadcasting: 5\nI0218 22:31:10.747671    3007 log.go:172] (0xc000a3ef20) Reply frame received for 5\nI0218 22:31:10.814759    3007 log.go:172] (0xc000a3ef20) Data frame received for 3\nI0218 22:31:10.814817    3007 log.go:172] (0xc0005286e0) (3) Data frame handling\nI0218 22:31:10.814829    3007 log.go:172] (0xc0005286e0) (3) Data frame sent\nI0218 22:31:10.814900    3007 log.go:172] (0xc000a3ef20) Data frame received for 5\nI0218 22:31:10.814915    3007 log.go:172] (0xc0001fb4a0) (5) Data frame handling\nI0218 22:31:10.814923    3007 log.go:172] (0xc0001fb4a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 22:31:10.919837    3007 log.go:172] (0xc000a3ef20) Data frame received for 1\nI0218 22:31:10.920003    3007 log.go:172] (0xc000a3ef20) (0xc0001fb4a0) Stream removed, broadcasting: 5\nI0218 22:31:10.920083    3007 log.go:172] (0xc000976a00) (1) Data frame handling\nI0218 22:31:10.920106    3007 log.go:172] (0xc000976a00) (1) Data frame sent\nI0218 22:31:10.920140    3007 log.go:172] (0xc000a3ef20) (0xc0005286e0) Stream removed, broadcasting: 3\nI0218 22:31:10.920190    3007 log.go:172] (0xc000a3ef20) (0xc000976a00) Stream removed, broadcasting: 1\nI0218 22:31:10.920217    3007 
log.go:172] (0xc000a3ef20) Go away received\nI0218 22:31:10.921338    3007 log.go:172] (0xc000a3ef20) (0xc000976a00) Stream removed, broadcasting: 1\nI0218 22:31:10.921365    3007 log.go:172] (0xc000a3ef20) (0xc0005286e0) Stream removed, broadcasting: 3\nI0218 22:31:10.921373    3007 log.go:172] (0xc000a3ef20) (0xc0001fb4a0) Stream removed, broadcasting: 5\n"
Feb 18 22:31:10.931: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 18 22:31:10.931: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 18 22:31:21.023: INFO: Waiting for StatefulSet statefulset-2080/ss2 to complete update
Feb 18 22:31:21.023: INFO: Waiting for Pod statefulset-2080/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 18 22:31:21.023: INFO: Waiting for Pod statefulset-2080/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 18 22:31:31.058: INFO: Waiting for StatefulSet statefulset-2080/ss2 to complete update
Feb 18 22:31:31.058: INFO: Waiting for Pod statefulset-2080/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 18 22:31:41.066: INFO: Waiting for StatefulSet statefulset-2080/ss2 to complete update
Feb 18 22:31:41.066: INFO: Waiting for Pod statefulset-2080/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb 18 22:31:51.033: INFO: Deleting all statefulset in ns statefulset-2080
Feb 18 22:31:51.037: INFO: Scaling statefulset ss2 to 0
Feb 18 22:32:21.076: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 22:32:21.080: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:32:21.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2080" for this suite.

• [SLOW TEST:183.365 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":189,"skipped":3157,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
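The JSON line above is the suite's per-spec progress record (emitted once per completed spec, with running `total`/`completed`/`skipped`/`failed` counters). A minimal Python sketch for tallying such records from a saved log; `tally` is a hypothetical helper, not part of the e2e framework:

```python
import json
import re

# Matches the per-spec JSON progress records embedded in the e2e log, e.g.
# {"msg":"PASSED ...","total":278,"completed":189,"skipped":3157,"failed":2,...}
RECORD_RE = re.compile(r'^\{"msg":.*\}$')

def tally(lines):
    """Return (passed_count, failed_spec_names) from e2e JSON progress records."""
    passed = 0
    failed = []
    for line in lines:
        if not RECORD_RE.match(line.strip()):
            continue  # ordinary log output, not a progress record
        rec = json.loads(line)
        if rec["msg"].startswith("PASSED"):
            passed += 1
        # each record carries the cumulative failure list; keep the latest
        failed = rec.get("failures", failed)
    return passed, failed
```

Feeding this log through `tally` would reproduce the running totals shown in the records themselves.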
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:32:21.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 18 22:32:21.297: INFO: Waiting up to 5m0s for pod "pod-65c39e98-42bd-47ff-b578-7e11fc4a7a25" in namespace "emptydir-5877" to be "success or failure"
Feb 18 22:32:21.304: INFO: Pod "pod-65c39e98-42bd-47ff-b578-7e11fc4a7a25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172543ms
Feb 18 22:32:23.314: INFO: Pod "pod-65c39e98-42bd-47ff-b578-7e11fc4a7a25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016183177s
Feb 18 22:32:25.319: INFO: Pod "pod-65c39e98-42bd-47ff-b578-7e11fc4a7a25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021622278s
Feb 18 22:32:27.349: INFO: Pod "pod-65c39e98-42bd-47ff-b578-7e11fc4a7a25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051033708s
Feb 18 22:32:29.357: INFO: Pod "pod-65c39e98-42bd-47ff-b578-7e11fc4a7a25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058967774s
Feb 18 22:32:31.365: INFO: Pod "pod-65c39e98-42bd-47ff-b578-7e11fc4a7a25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067427235s
STEP: Saw pod success
Feb 18 22:32:31.365: INFO: Pod "pod-65c39e98-42bd-47ff-b578-7e11fc4a7a25" satisfied condition "success or failure"
Feb 18 22:32:31.368: INFO: Trying to get logs from node jerma-node pod pod-65c39e98-42bd-47ff-b578-7e11fc4a7a25 container test-container: 
STEP: delete the pod
Feb 18 22:32:31.738: INFO: Waiting for pod pod-65c39e98-42bd-47ff-b578-7e11fc4a7a25 to disappear
Feb 18 22:32:31.763: INFO: Pod pod-65c39e98-42bd-47ff-b578-7e11fc4a7a25 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:32:31.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5877" for this suite.

• [SLOW TEST:10.632 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3165,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
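The "Waiting up to 5m0s ... Elapsed: 2.01s / 4.02s / ..." lines above follow the framework's poll-until-condition-or-timeout pattern. A small sketch of that pattern; `wait_for` is a hypothetical stand-in, not the framework's actual implementation:

```python
import time

def wait_for(condition, timeout_s, interval_s=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition()` until it returns True or `timeout_s` elapses.

    Mirrors the log's pattern of retrying every ~2s and reporting elapsed
    time. Returns the elapsed seconds on success; raises TimeoutError on
    timeout.
    """
    start = clock()
    while True:
        elapsed = clock() - start
        if condition():
            return elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        sleep(interval_s)
```

The pod checks in this log are exactly this shape: the condition is "phase is Succeeded or Failed", the interval is two seconds, and the per-attempt `Elapsed` values are what get printed.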
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:32:31.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 18 22:32:40.043: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:32:40.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6624" for this suite.

• [SLOW TEST:8.422 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3182,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
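The spec above checks that with `TerminationMessagePolicy: FallbackToLogsOnError`, a *succeeding* container reports an empty termination message (the log fallback applies only on failure). A toy model of that decision; the function and its parameters are illustrative, not the kubelet's actual code, and the kubelet's truncation limit is deliberately elided:

```python
def termination_message(policy, exit_code, file_contents, logs):
    """Toy model of kubelet termination-message selection.

    The termination message file wins when non-empty; with
    FallbackToLogsOnError the log tail is used only when the container
    *failed* and the file is empty -- so a succeeding pod, as in the spec
    above, reports an empty message.
    """
    if file_contents:
        return file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs  # the real kubelet truncates this tail; limit elided here
    return ""
```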
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:32:40.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Feb 18 22:32:40.461: INFO: Waiting up to 5m0s for pod "client-containers-0966d83f-d5e2-4583-99ba-03faf16156e8" in namespace "containers-9169" to be "success or failure"
Feb 18 22:32:40.477: INFO: Pod "client-containers-0966d83f-d5e2-4583-99ba-03faf16156e8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.085701ms
Feb 18 22:32:42.489: INFO: Pod "client-containers-0966d83f-d5e2-4583-99ba-03faf16156e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028130782s
Feb 18 22:32:44.500: INFO: Pod "client-containers-0966d83f-d5e2-4583-99ba-03faf16156e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039250368s
Feb 18 22:32:46.512: INFO: Pod "client-containers-0966d83f-d5e2-4583-99ba-03faf16156e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050993419s
Feb 18 22:32:48.531: INFO: Pod "client-containers-0966d83f-d5e2-4583-99ba-03faf16156e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069879425s
Feb 18 22:32:50.546: INFO: Pod "client-containers-0966d83f-d5e2-4583-99ba-03faf16156e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084815917s
STEP: Saw pod success
Feb 18 22:32:50.546: INFO: Pod "client-containers-0966d83f-d5e2-4583-99ba-03faf16156e8" satisfied condition "success or failure"
Feb 18 22:32:50.551: INFO: Trying to get logs from node jerma-node pod client-containers-0966d83f-d5e2-4583-99ba-03faf16156e8 container test-container: 
STEP: delete the pod
Feb 18 22:32:50.601: INFO: Waiting for pod client-containers-0966d83f-d5e2-4583-99ba-03faf16156e8 to disappear
Feb 18 22:32:50.639: INFO: Pod client-containers-0966d83f-d5e2-4583-99ba-03faf16156e8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:32:50.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9169" for this suite.

• [SLOW TEST:10.458 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3199,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
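The spec above exercises Kubernetes' command/args override rules: the pod's `command` replaces the image ENTRYPOINT, `args` replaces the image CMD, and supplying `command` alone discards the image CMD. A sketch of that resolution (the helper name is invented for illustration):

```python
def effective_argv(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve a container's argv the way Kubernetes documents it:

    - neither set:        image ENTRYPOINT + image CMD
    - args only:          image ENTRYPOINT + args
    - command only:       command (image CMD is ignored)
    - command and args:   command + args
    """
    if command is not None:
        return command + (args or [])
    return image_entrypoint + (args if args is not None else image_cmd)
```

The "(docker cmd)" in the spec name refers to overriding the image's default arguments, i.e. the `args` column of this table.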
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:32:50.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb 18 22:33:03.277: INFO: Successfully updated pod "adopt-release-ch7bg"
STEP: Checking that the Job readopts the Pod
Feb 18 22:33:03.278: INFO: Waiting up to 15m0s for pod "adopt-release-ch7bg" in namespace "job-5403" to be "adopted"
Feb 18 22:33:03.291: INFO: Pod "adopt-release-ch7bg": Phase="Running", Reason="", readiness=true. Elapsed: 12.997853ms
Feb 18 22:33:05.300: INFO: Pod "adopt-release-ch7bg": Phase="Running", Reason="", readiness=true. Elapsed: 2.022107321s
Feb 18 22:33:05.300: INFO: Pod "adopt-release-ch7bg" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb 18 22:33:05.827: INFO: Successfully updated pod "adopt-release-ch7bg"
STEP: Checking that the Job releases the Pod
Feb 18 22:33:05.827: INFO: Waiting up to 15m0s for pod "adopt-release-ch7bg" in namespace "job-5403" to be "released"
Feb 18 22:33:05.858: INFO: Pod "adopt-release-ch7bg": Phase="Running", Reason="", readiness=true. Elapsed: 30.784895ms
Feb 18 22:33:05.858: INFO: Pod "adopt-release-ch7bg" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:33:05.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5403" for this suite.

• [SLOW TEST:15.298 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":193,"skipped":3214,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
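The adopt/release behavior exercised above is driven by label matching: a controller adopts an orphaned pod whose labels match its selector, and releases a pod whose labels stop matching. A toy reconciliation sketch (data shapes and the `"job"` owner marker are invented for illustration, not the controller-manager's real types):

```python
def reconcile_ownership(selector, pods):
    """Toy model of the Job controller's adopt/release pass.

    Each pod is a dict with "labels" and "owner"; adoption sets the owner
    (a stand-in for the controllerRef), release clears it.
    """
    for pod in pods:
        matches = all(pod["labels"].get(k) == v for k, v in selector.items())
        if matches and pod.get("owner") is None:
            pod["owner"] = "job"   # adopt the matching orphan
        elif not matches and pod.get("owner") == "job":
            pod["owner"] = None    # release the no-longer-matching pod
    return pods
```

In the log, orphaning and re-labeling `adopt-release-ch7bg` are the two pod updates, and the "adopted"/"released" conditions are observations of exactly these transitions.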
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:33:05.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb 18 22:33:06.044: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:33:26.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3496" for this suite.

• [SLOW TEST:20.618 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":194,"skipped":3220,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
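The init-container contract the spec above checks is ordering: each init container must run to completion, in order, before any app container starts. A toy model of that ordering (the function is illustrative, not kubelet code):

```python
def start_pod(init_containers, containers, run):
    """Toy model of pod startup ordering.

    `run(name)` executes a container and returns True on success. Init
    containers run sequentially; a failure blocks the app containers
    entirely (on a RestartAlways pod the kubelet would retry instead).
    """
    for c in init_containers:
        if not run(c):
            return False  # init failure: app containers never start
    for c in containers:
        run(c)
    return True
```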
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:33:26.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 22:33:27.750: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 22:33:29.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:33:31.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:33:33.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:33:35.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:33:37.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662007, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 22:33:40.798: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:33:53.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1925" for this suite.
STEP: Destroying namespace "webhook-1925-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:26.733 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":195,"skipped":3258,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
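The timeout semantics the spec above verifies (a 1s webhook timeout against a 5s-slow webhook fails the request, unless the failure policy is Ignore) can be modeled with a caller-side deadline. A sketch, with small latencies standing in for the test's 1s/5s values; the functions are illustrative, not apiserver code:

```python
import concurrent.futures
import time

def slow_webhook(latency_s):
    """Stand-in for the test's deliberately slow admission webhook."""
    time.sleep(latency_s)
    return "allowed"

def call_with_timeout(latency_s, timeout_s):
    """Invoke the handler under a caller-side deadline, analogous to the
    webhook configuration's timeoutSeconds. Returns the response, or None
    on timeout (the outcome the Ignore failure policy tolerates)."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        fut = pool.submit(slow_webhook, latency_s)
        try:
            return fut.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            return None
```

With `Fail` as the failure policy, the `None` case would reject the admission request instead, which is the "Request fails when timeout (1s) is shorter than slow webhook latency (5s)" step above.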
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:33:53.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7023
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7023
STEP: creating replication controller externalsvc in namespace services-7023
I0218 22:33:53.599370       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7023, replica count: 2
I0218 22:33:56.650766       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:33:59.651598       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:34:02.653653       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:34:05.654351       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 22:34:08.654766       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Feb 18 22:34:08.737: INFO: Creating new exec pod
Feb 18 22:34:16.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7023 execpodhhp99 -- /bin/sh -x -c nslookup clusterip-service'
Feb 18 22:34:17.244: INFO: stderr: "I0218 22:34:17.053056    3027 log.go:172] (0xc0000ea370) (0xc000402640) Create stream\nI0218 22:34:17.053410    3027 log.go:172] (0xc0000ea370) (0xc000402640) Stream added, broadcasting: 1\nI0218 22:34:17.058691    3027 log.go:172] (0xc0000ea370) Reply frame received for 1\nI0218 22:34:17.058845    3027 log.go:172] (0xc0000ea370) (0xc00075c6e0) Create stream\nI0218 22:34:17.058876    3027 log.go:172] (0xc0000ea370) (0xc00075c6e0) Stream added, broadcasting: 3\nI0218 22:34:17.062447    3027 log.go:172] (0xc0000ea370) Reply frame received for 3\nI0218 22:34:17.062501    3027 log.go:172] (0xc0000ea370) (0xc00075c780) Create stream\nI0218 22:34:17.062517    3027 log.go:172] (0xc0000ea370) (0xc00075c780) Stream added, broadcasting: 5\nI0218 22:34:17.066253    3027 log.go:172] (0xc0000ea370) Reply frame received for 5\nI0218 22:34:17.141392    3027 log.go:172] (0xc0000ea370) Data frame received for 5\nI0218 22:34:17.141489    3027 log.go:172] (0xc00075c780) (5) Data frame handling\nI0218 22:34:17.141524    3027 log.go:172] (0xc00075c780) (5) Data frame sent\n+ nslookup clusterip-service\nI0218 22:34:17.157835    3027 log.go:172] (0xc0000ea370) Data frame received for 3\nI0218 22:34:17.157863    3027 log.go:172] (0xc00075c6e0) (3) Data frame handling\nI0218 22:34:17.157880    3027 log.go:172] (0xc00075c6e0) (3) Data frame sent\nI0218 22:34:17.159743    3027 log.go:172] (0xc0000ea370) Data frame received for 3\nI0218 22:34:17.159762    3027 log.go:172] (0xc00075c6e0) (3) Data frame handling\nI0218 22:34:17.159775    3027 log.go:172] (0xc00075c6e0) (3) Data frame sent\nI0218 22:34:17.228147    3027 log.go:172] (0xc0000ea370) (0xc00075c780) Stream removed, broadcasting: 5\nI0218 22:34:17.228372    3027 log.go:172] (0xc0000ea370) Data frame received for 1\nI0218 22:34:17.228406    3027 log.go:172] (0xc000402640) (1) Data frame handling\nI0218 22:34:17.228436    3027 log.go:172] (0xc000402640) (1) Data frame sent\nI0218 22:34:17.228494    3027 log.go:172] (0xc0000ea370) (0xc00075c6e0) Stream removed, broadcasting: 3\nI0218 22:34:17.228594    3027 log.go:172] (0xc0000ea370) (0xc000402640) Stream removed, broadcasting: 1\nI0218 22:34:17.229482    3027 log.go:172] (0xc0000ea370) Go away received\nI0218 22:34:17.230205    3027 log.go:172] (0xc0000ea370) (0xc000402640) Stream removed, broadcasting: 1\nI0218 22:34:17.230244    3027 log.go:172] (0xc0000ea370) (0xc00075c6e0) Stream removed, broadcasting: 3\nI0218 22:34:17.230265    3027 log.go:172] (0xc0000ea370) (0xc00075c780) Stream removed, broadcasting: 5\n"
Feb 18 22:34:17.245: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7023.svc.cluster.local\tcanonical name = externalsvc.services-7023.svc.cluster.local.\nName:\texternalsvc.services-7023.svc.cluster.local\nAddress: 10.96.186.255\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7023, will wait for the garbage collector to delete the pods
Feb 18 22:34:17.308: INFO: Deleting ReplicationController externalsvc took: 6.249035ms
Feb 18 22:34:17.409: INFO: Terminating ReplicationController externalsvc pods took: 100.389286ms
Feb 18 22:34:33.238: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:34:33.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7023" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:39.949 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":196,"skipped":3287,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
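The decisive observation in the spec above is the `nslookup` stdout showing that `clusterip-service` became a CNAME to `externalsvc.services-7023.svc.cluster.local` after the type change. A small sketch for extracting that CNAME target from such output; the helper is invented for illustration, not part of the test framework:

```python
import re

def canonical_name(nslookup_stdout):
    """Extract the CNAME target from nslookup output such as the stdout
    logged above for clusterip-service. Strips the trailing root dot;
    returns None when no 'canonical name' line is present."""
    m = re.search(r"canonical name = (\S+?)\.?$", nslookup_stdout, re.MULTILINE)
    return m.group(1) if m else None
```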
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:34:33.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-afbe014d-01a4-4059-a662-f0dd17243e9b
STEP: Creating a pod to test consume secrets
Feb 18 22:34:33.358: INFO: Waiting up to 5m0s for pod "pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc" in namespace "secrets-2362" to be "success or failure"
Feb 18 22:34:33.381: INFO: Pod "pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc": Phase="Pending", Reason="", readiness=false. Elapsed: 23.237495ms
Feb 18 22:34:35.387: INFO: Pod "pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029479753s
Feb 18 22:34:37.396: INFO: Pod "pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038037366s
Feb 18 22:34:39.401: INFO: Pod "pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043232223s
Feb 18 22:34:41.409: INFO: Pod "pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0510338s
Feb 18 22:34:43.419: INFO: Pod "pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060887553s
Feb 18 22:34:45.426: INFO: Pod "pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.068382457s
STEP: Saw pod success
Feb 18 22:34:45.426: INFO: Pod "pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc" satisfied condition "success or failure"
Feb 18 22:34:45.430: INFO: Trying to get logs from node jerma-node pod pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc container secret-volume-test: 
STEP: delete the pod
Feb 18 22:34:45.543: INFO: Waiting for pod pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc to disappear
Feb 18 22:34:45.548: INFO: Pod pod-secrets-521a97ff-bf87-4c32-aa89-2c71b798dedc no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:34:45.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2362" for this suite.

• [SLOW TEST:12.290 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3314,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:34:45.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 18 22:35:04.464: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 22:35:04.473: INFO: Pod pod-with-poststart-http-hook still exists
Feb 18 22:35:06.474: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 22:35:06.482: INFO: Pod pod-with-poststart-http-hook still exists
Feb 18 22:35:08.474: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 22:35:08.484: INFO: Pod pod-with-poststart-http-hook still exists
Feb 18 22:35:10.474: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 22:35:10.503: INFO: Pod pod-with-poststart-http-hook still exists
Feb 18 22:35:12.474: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 22:35:12.482: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:35:12.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1441" for this suite.

• [SLOW TEST:26.935 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3345,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:35:12.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-1911
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 18 22:35:12.575: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 18 22:35:50.826: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1911 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:35:50.826: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:35:50.896814       8 log.go:172] (0xc002b50420) (0xc000eb4a00) Create stream
I0218 22:35:50.896956       8 log.go:172] (0xc002b50420) (0xc000eb4a00) Stream added, broadcasting: 1
I0218 22:35:50.917061       8 log.go:172] (0xc002b50420) Reply frame received for 1
I0218 22:35:50.917700       8 log.go:172] (0xc002b50420) (0xc001158780) Create stream
I0218 22:35:50.917816       8 log.go:172] (0xc002b50420) (0xc001158780) Stream added, broadcasting: 3
I0218 22:35:50.924775       8 log.go:172] (0xc002b50420) Reply frame received for 3
I0218 22:35:50.924901       8 log.go:172] (0xc002b50420) (0xc000eb4d20) Create stream
I0218 22:35:50.924932       8 log.go:172] (0xc002b50420) (0xc000eb4d20) Stream added, broadcasting: 5
I0218 22:35:50.928007       8 log.go:172] (0xc002b50420) Reply frame received for 5
I0218 22:35:52.008220       8 log.go:172] (0xc002b50420) Data frame received for 3
I0218 22:35:52.008405       8 log.go:172] (0xc001158780) (3) Data frame handling
I0218 22:35:52.008470       8 log.go:172] (0xc001158780) (3) Data frame sent
I0218 22:35:52.116829       8 log.go:172] (0xc002b50420) (0xc001158780) Stream removed, broadcasting: 3
I0218 22:35:52.117142       8 log.go:172] (0xc002b50420) Data frame received for 1
I0218 22:35:52.117417       8 log.go:172] (0xc002b50420) (0xc000eb4d20) Stream removed, broadcasting: 5
I0218 22:35:52.117565       8 log.go:172] (0xc000eb4a00) (1) Data frame handling
I0218 22:35:52.117627       8 log.go:172] (0xc000eb4a00) (1) Data frame sent
I0218 22:35:52.117682       8 log.go:172] (0xc002b50420) (0xc000eb4a00) Stream removed, broadcasting: 1
I0218 22:35:52.117739       8 log.go:172] (0xc002b50420) Go away received
I0218 22:35:52.118097       8 log.go:172] (0xc002b50420) (0xc000eb4a00) Stream removed, broadcasting: 1
I0218 22:35:52.118135       8 log.go:172] (0xc002b50420) (0xc001158780) Stream removed, broadcasting: 3
I0218 22:35:52.118145       8 log.go:172] (0xc002b50420) (0xc000eb4d20) Stream removed, broadcasting: 5
Feb 18 22:35:52.118: INFO: Found all expected endpoints: [netserver-0]
Feb 18 22:35:52.126: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1911 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:35:52.126: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:35:52.199650       8 log.go:172] (0xc002b06370) (0xc000ad3040) Create stream
I0218 22:35:52.199830       8 log.go:172] (0xc002b06370) (0xc000ad3040) Stream added, broadcasting: 1
I0218 22:35:52.207329       8 log.go:172] (0xc002b06370) Reply frame received for 1
I0218 22:35:52.207503       8 log.go:172] (0xc002b06370) (0xc000eb4f00) Create stream
I0218 22:35:52.207546       8 log.go:172] (0xc002b06370) (0xc000eb4f00) Stream added, broadcasting: 3
I0218 22:35:52.209712       8 log.go:172] (0xc002b06370) Reply frame received for 3
I0218 22:35:52.209803       8 log.go:172] (0xc002b06370) (0xc000e0a500) Create stream
I0218 22:35:52.209820       8 log.go:172] (0xc002b06370) (0xc000e0a500) Stream added, broadcasting: 5
I0218 22:35:52.211687       8 log.go:172] (0xc002b06370) Reply frame received for 5
I0218 22:35:53.312667       8 log.go:172] (0xc002b06370) Data frame received for 3
I0218 22:35:53.312835       8 log.go:172] (0xc000eb4f00) (3) Data frame handling
I0218 22:35:53.312960       8 log.go:172] (0xc000eb4f00) (3) Data frame sent
I0218 22:35:53.426536       8 log.go:172] (0xc002b06370) Data frame received for 1
I0218 22:35:53.426702       8 log.go:172] (0xc000ad3040) (1) Data frame handling
I0218 22:35:53.426739       8 log.go:172] (0xc000ad3040) (1) Data frame sent
I0218 22:35:53.427028       8 log.go:172] (0xc002b06370) (0xc000ad3040) Stream removed, broadcasting: 1
I0218 22:35:53.427154       8 log.go:172] (0xc002b06370) (0xc000eb4f00) Stream removed, broadcasting: 3
I0218 22:35:53.427647       8 log.go:172] (0xc002b06370) (0xc000e0a500) Stream removed, broadcasting: 5
I0218 22:35:53.427842       8 log.go:172] (0xc002b06370) (0xc000ad3040) Stream removed, broadcasting: 1
I0218 22:35:53.427898       8 log.go:172] (0xc002b06370) (0xc000eb4f00) Stream removed, broadcasting: 3
I0218 22:35:53.427938       8 log.go:172] (0xc002b06370) Go away received
I0218 22:35:53.428013       8 log.go:172] (0xc002b06370) (0xc000e0a500) Stream removed, broadcasting: 5
Feb 18 22:35:53.428: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:35:53.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1911" for this suite.

• [SLOW TEST:40.951 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3384,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:35:53.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4945
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 18 22:35:53.564: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 18 22:36:31.876: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.44.0.3&port=8081&tries=1'] Namespace:pod-network-test-4945 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:36:31.877: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:36:31.934848       8 log.go:172] (0xc0023f5ef0) (0xc00097caa0) Create stream
I0218 22:36:31.934921       8 log.go:172] (0xc0023f5ef0) (0xc00097caa0) Stream added, broadcasting: 1
I0218 22:36:31.941047       8 log.go:172] (0xc0023f5ef0) Reply frame received for 1
I0218 22:36:31.941139       8 log.go:172] (0xc0023f5ef0) (0xc0013190e0) Create stream
I0218 22:36:31.941186       8 log.go:172] (0xc0023f5ef0) (0xc0013190e0) Stream added, broadcasting: 3
I0218 22:36:31.944592       8 log.go:172] (0xc0023f5ef0) Reply frame received for 3
I0218 22:36:31.944636       8 log.go:172] (0xc0023f5ef0) (0xc001319540) Create stream
I0218 22:36:31.944649       8 log.go:172] (0xc0023f5ef0) (0xc001319540) Stream added, broadcasting: 5
I0218 22:36:31.946284       8 log.go:172] (0xc0023f5ef0) Reply frame received for 5
I0218 22:36:32.052819       8 log.go:172] (0xc0023f5ef0) Data frame received for 3
I0218 22:36:32.053067       8 log.go:172] (0xc0013190e0) (3) Data frame handling
I0218 22:36:32.053148       8 log.go:172] (0xc0013190e0) (3) Data frame sent
I0218 22:36:32.166511       8 log.go:172] (0xc0023f5ef0) (0xc0013190e0) Stream removed, broadcasting: 3
I0218 22:36:32.166910       8 log.go:172] (0xc0023f5ef0) Data frame received for 1
I0218 22:36:32.166927       8 log.go:172] (0xc00097caa0) (1) Data frame handling
I0218 22:36:32.166970       8 log.go:172] (0xc00097caa0) (1) Data frame sent
I0218 22:36:32.166980       8 log.go:172] (0xc0023f5ef0) (0xc00097caa0) Stream removed, broadcasting: 1
I0218 22:36:32.167265       8 log.go:172] (0xc0023f5ef0) (0xc001319540) Stream removed, broadcasting: 5
I0218 22:36:32.167323       8 log.go:172] (0xc0023f5ef0) (0xc00097caa0) Stream removed, broadcasting: 1
I0218 22:36:32.167370       8 log.go:172] (0xc0023f5ef0) (0xc0013190e0) Stream removed, broadcasting: 3
I0218 22:36:32.167383       8 log.go:172] (0xc0023f5ef0) (0xc001319540) Stream removed, broadcasting: 5
I0218 22:36:32.167658       8 log.go:172] (0xc0023f5ef0) Go away received
Feb 18 22:36:32.168: INFO: Waiting for responses: map[]
Feb 18 22:36:32.202: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.32.0.5&port=8081&tries=1'] Namespace:pod-network-test-4945 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:36:32.202: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:36:32.261487       8 log.go:172] (0xc002cf22c0) (0xc0016463c0) Create stream
I0218 22:36:32.261605       8 log.go:172] (0xc002cf22c0) (0xc0016463c0) Stream added, broadcasting: 1
I0218 22:36:32.264883       8 log.go:172] (0xc002cf22c0) Reply frame received for 1
I0218 22:36:32.264939       8 log.go:172] (0xc002cf22c0) (0xc0011590e0) Create stream
I0218 22:36:32.264957       8 log.go:172] (0xc002cf22c0) (0xc0011590e0) Stream added, broadcasting: 3
I0218 22:36:32.265863       8 log.go:172] (0xc002cf22c0) Reply frame received for 3
I0218 22:36:32.265884       8 log.go:172] (0xc002cf22c0) (0xc0016465a0) Create stream
I0218 22:36:32.265895       8 log.go:172] (0xc002cf22c0) (0xc0016465a0) Stream added, broadcasting: 5
I0218 22:36:32.267456       8 log.go:172] (0xc002cf22c0) Reply frame received for 5
I0218 22:36:32.358725       8 log.go:172] (0xc002cf22c0) Data frame received for 3
I0218 22:36:32.358863       8 log.go:172] (0xc0011590e0) (3) Data frame handling
I0218 22:36:32.358906       8 log.go:172] (0xc0011590e0) (3) Data frame sent
I0218 22:36:32.431573       8 log.go:172] (0xc002cf22c0) (0xc0011590e0) Stream removed, broadcasting: 3
I0218 22:36:32.431735       8 log.go:172] (0xc002cf22c0) (0xc0016465a0) Stream removed, broadcasting: 5
I0218 22:36:32.431895       8 log.go:172] (0xc002cf22c0) Data frame received for 1
I0218 22:36:32.432028       8 log.go:172] (0xc0016463c0) (1) Data frame handling
I0218 22:36:32.432075       8 log.go:172] (0xc0016463c0) (1) Data frame sent
I0218 22:36:32.432114       8 log.go:172] (0xc002cf22c0) (0xc0016463c0) Stream removed, broadcasting: 1
I0218 22:36:32.432155       8 log.go:172] (0xc002cf22c0) Go away received
I0218 22:36:32.432467       8 log.go:172] (0xc002cf22c0) (0xc0016463c0) Stream removed, broadcasting: 1
I0218 22:36:32.432480       8 log.go:172] (0xc002cf22c0) (0xc0011590e0) Stream removed, broadcasting: 3
I0218 22:36:32.432486       8 log.go:172] (0xc002cf22c0) (0xc0016465a0) Stream removed, broadcasting: 5
Feb 18 22:36:32.432: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:36:32.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4945" for this suite.

• [SLOW TEST:38.999 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3422,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:36:32.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb 18 22:36:32.520: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 18 22:36:32.566: INFO: Waiting for terminating namespaces to be deleted...
Feb 18 22:36:32.571: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 18 22:36:32.583: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 18 22:36:32.583: INFO: 	Container weave ready: true, restart count 1
Feb 18 22:36:32.583: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 22:36:32.583: INFO: host-test-container-pod from pod-network-test-4945 started at 2020-02-18 22:36:23 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.583: INFO: 	Container agnhost ready: true, restart count 0
Feb 18 22:36:32.583: INFO: test-container-pod from pod-network-test-4945 started at 2020-02-18 22:36:23 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.583: INFO: 	Container webserver ready: true, restart count 0
Feb 18 22:36:32.583: INFO: netserver-0 from pod-network-test-4945 started at 2020-02-18 22:35:53 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.583: INFO: 	Container webserver ready: true, restart count 0
Feb 18 22:36:32.583: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.583: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 22:36:32.583: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 18 22:36:32.608: INFO: netserver-1 from pod-network-test-4945 started at 2020-02-18 22:35:53 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.608: INFO: 	Container webserver ready: true, restart count 0
Feb 18 22:36:32.608: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.608: INFO: 	Container kube-controller-manager ready: true, restart count 14
Feb 18 22:36:32.608: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.608: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 22:36:32.608: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 18 22:36:32.608: INFO: 	Container weave ready: true, restart count 0
Feb 18 22:36:32.608: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 22:36:32.608: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.608: INFO: 	Container kube-scheduler ready: true, restart count 18
Feb 18 22:36:32.608: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.608: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 18 22:36:32.608: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.608: INFO: 	Container etcd ready: true, restart count 1
Feb 18 22:36:32.608: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.608: INFO: 	Container coredns ready: true, restart count 0
Feb 18 22:36:32.608: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 18 22:36:32.608: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-0c9f9962-55b0-45b6-94a0-ae7e93536172 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-0c9f9962-55b0-45b6-94a0-ae7e93536172 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0c9f9962-55b0-45b6-94a0-ae7e93536172
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:37:11.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9250" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:38.633 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":201,"skipped":3429,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:37:11.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 22:37:12.155: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 22:37:14.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:37:16.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:37:19.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:37:20.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:37:22.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662232, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 22:37:25.234: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:37:25.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1710-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:37:26.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5303" for this suite.
STEP: Destroying namespace "webhook-5303-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.651 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":202,"skipped":3430,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
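The test above registers a mutating webhook for a custom resource via the AdmissionRegistration API. A registration of that shape can be sketched as a plain manifest dict; this is a minimal illustrative sketch, not the test's actual object (the webhook name, CRD group, resource plural, service path, and caBundle are all assumptions — only the namespace and service name appear in the log):

```python
# Illustrative sketch of a v1 MutatingWebhookConfiguration targeting a
# custom resource. All names are placeholders unless noted.
mutating_webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "MutatingWebhookConfiguration",
    "metadata": {"name": "e2e-test-mutating-webhook"},  # assumed name
    "webhooks": [{
        "name": "crd-mutation.webhook.example.com",     # assumed name
        "rules": [{
            "apiGroups": ["webhook.example.com"],        # assumed CRD group
            "apiVersions": ["v1"],
            "operations": ["CREATE"],
            "resources": ["e2e-test-webhook-crds"],      # assumed plural
        }],
        "clientConfig": {
            "service": {
                "namespace": "webhook-5303",             # from the log
                "name": "e2e-test-webhook",              # from the log
                "path": "/mutating-custom-resource",     # assumed path
            },
            "caBundle": "<base64 CA bundle>",            # placeholder
        },
        "admissionReviewVersions": ["v1"],
        "sideEffects": "None",
    }],
}
```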
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:37:26.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:37:50.921: INFO: Container started at 2020-02-18 22:37:34 +0000 UTC, pod became ready at 2020-02-18 22:37:50 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:37:50.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2875" for this suite.

• [SLOW TEST:24.219 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3437,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
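The readiness-probe test above (container started 22:37:34, pod ready 22:37:50) verifies that a pod is not reported ready before the probe's initial delay elapses. A pod spec with that behavior can be sketched as follows — the image, port, and timing values here are assumptions for illustration, not values from this run:

```python
# Sketch of a pod whose readiness probe has a non-zero initialDelaySeconds,
# so the pod must not become Ready before that delay (values are assumed).
probe_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "test-webserver"},          # assumed name
    "spec": {
        "containers": [{
            "name": "test-webserver",
            "image": "k8s.gcr.io/test-webserver",    # assumed image
            "readinessProbe": {
                "httpGet": {"path": "/", "port": 80},
                "initialDelaySeconds": 30,  # not Ready before this elapses
                "periodSeconds": 10,
                "failureThreshold": 3,
            },
        }],
    },
}
```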
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:37:50.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-5644/configmap-test-af3bc067-d2ec-4a5a-98a7-1d2cd36752a1
STEP: Creating a pod to test consume configMaps
Feb 18 22:37:51.094: INFO: Waiting up to 5m0s for pod "pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2" in namespace "configmap-5644" to be "success or failure"
Feb 18 22:37:51.141: INFO: Pod "pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 46.434982ms
Feb 18 22:37:53.155: INFO: Pod "pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060956964s
Feb 18 22:37:55.161: INFO: Pod "pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066855439s
Feb 18 22:37:57.169: INFO: Pod "pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074426589s
Feb 18 22:37:59.191: INFO: Pod "pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09700582s
Feb 18 22:38:01.199: INFO: Pod "pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.104652576s
Feb 18 22:38:03.226: INFO: Pod "pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.131587818s
STEP: Saw pod success
Feb 18 22:38:03.226: INFO: Pod "pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2" satisfied condition "success or failure"
Feb 18 22:38:03.230: INFO: Trying to get logs from node jerma-node pod pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2 container env-test: 
STEP: delete the pod
Feb 18 22:38:03.323: INFO: Waiting for pod pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2 to disappear
Feb 18 22:38:03.370: INFO: Pod pod-configmaps-ff15857c-ffa0-477a-9f27-f043cf6cc2f2 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:38:03.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5644" for this suite.

• [SLOW TEST:12.424 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3485,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
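The ConfigMap test above injects ConfigMap data into a container's environment. The wiring can be sketched like this — the namespace comes from the log, but the ConfigMap key, value, and env-var name are assumptions:

```python
# Sketch of ConfigMap-to-environment wiring: a ConfigMap plus the env
# entry a pod would use to consume one of its keys (key/value assumed).
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test", "namespace": "configmap-5644"},
    "data": {"data-1": "value-1"},  # assumed key/value
}

pod_env = [{
    "name": "CONFIG_DATA_1",  # assumed env var name
    "valueFrom": {
        "configMapKeyRef": {"name": "configmap-test", "key": "data-1"},
    },
}]
```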
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:38:03.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 18 22:38:03.539: INFO: Waiting up to 5m0s for pod "downward-api-04e4a87d-cf2f-4064-adc5-ae3cb0f66c80" in namespace "downward-api-4238" to be "success or failure"
Feb 18 22:38:03.593: INFO: Pod "downward-api-04e4a87d-cf2f-4064-adc5-ae3cb0f66c80": Phase="Pending", Reason="", readiness=false. Elapsed: 54.164003ms
Feb 18 22:38:05.601: INFO: Pod "downward-api-04e4a87d-cf2f-4064-adc5-ae3cb0f66c80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061673263s
Feb 18 22:38:07.607: INFO: Pod "downward-api-04e4a87d-cf2f-4064-adc5-ae3cb0f66c80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067771468s
Feb 18 22:38:09.613: INFO: Pod "downward-api-04e4a87d-cf2f-4064-adc5-ae3cb0f66c80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074082503s
Feb 18 22:38:11.620: INFO: Pod "downward-api-04e4a87d-cf2f-4064-adc5-ae3cb0f66c80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081496066s
STEP: Saw pod success
Feb 18 22:38:11.621: INFO: Pod "downward-api-04e4a87d-cf2f-4064-adc5-ae3cb0f66c80" satisfied condition "success or failure"
Feb 18 22:38:11.625: INFO: Trying to get logs from node jerma-node pod downward-api-04e4a87d-cf2f-4064-adc5-ae3cb0f66c80 container dapi-container: 
STEP: delete the pod
Feb 18 22:38:11.682: INFO: Waiting for pod downward-api-04e4a87d-cf2f-4064-adc5-ae3cb0f66c80 to disappear
Feb 18 22:38:11.689: INFO: Pod downward-api-04e4a87d-cf2f-4064-adc5-ae3cb0f66c80 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:38:11.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4238" for this suite.

• [SLOW TEST:8.315 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3524,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
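The downward-API test above exposes pod name, namespace, and IP as environment variables. The standard fieldRef wiring looks like this (the env-var names are illustrative; the fieldPath values are the standard downward-API fields):

```python
# Sketch of downward-API env vars exposing pod identity to the container.
downward_env = [
    {"name": "POD_NAME",
     "valueFrom": {"fieldRef": {"fieldPath": "metadata.name"}}},
    {"name": "POD_NAMESPACE",
     "valueFrom": {"fieldRef": {"fieldPath": "metadata.namespace"}}},
    {"name": "POD_IP",
     "valueFrom": {"fieldRef": {"fieldPath": "status.podIP"}}},
]
```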
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:38:11.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 22:38:12.577: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 22:38:14.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:38:17.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:38:18.714: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:38:20.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:38:22.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662292, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 22:38:25.664: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Feb 18 22:38:25.706: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:38:26.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7582" for this suite.
STEP: Destroying namespace "webhook-7582-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.653 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":206,"skipped":3556,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
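The "deny crd creation" test above registers a webhook that rejects CustomResourceDefinition objects. A validating-webhook registration of that shape can be sketched as below; the namespace and service name are from the log, while the webhook name, path, and caBundle are placeholders:

```python
# Sketch of a v1 ValidatingWebhookConfiguration intercepting CRD creation.
validating_webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "deny-crd-webhook"},  # assumed name
    "webhooks": [{
        "name": "deny-crd.webhook.example.com",  # assumed name
        "rules": [{
            "apiGroups": ["apiextensions.k8s.io"],
            "apiVersions": ["v1"],
            "operations": ["CREATE"],
            "resources": ["customresourcedefinitions"],
        }],
        "clientConfig": {
            "service": {
                "namespace": "webhook-7582",     # from the log
                "name": "e2e-test-webhook",      # from the log
                "path": "/always-deny",          # assumed path
            },
            "caBundle": "<base64 CA bundle>",    # placeholder
        },
        "admissionReviewVersions": ["v1"],
        "sideEffects": "None",
        "failurePolicy": "Fail",
    }],
}
```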
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:38:26.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 22:38:26.489: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7764b37a-7223-47b9-89a6-ad3b2bdf57e2" in namespace "projected-4663" to be "success or failure"
Feb 18 22:38:26.511: INFO: Pod "downwardapi-volume-7764b37a-7223-47b9-89a6-ad3b2bdf57e2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.285709ms
Feb 18 22:38:28.525: INFO: Pod "downwardapi-volume-7764b37a-7223-47b9-89a6-ad3b2bdf57e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035467812s
Feb 18 22:38:30.537: INFO: Pod "downwardapi-volume-7764b37a-7223-47b9-89a6-ad3b2bdf57e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047061766s
Feb 18 22:38:32.547: INFO: Pod "downwardapi-volume-7764b37a-7223-47b9-89a6-ad3b2bdf57e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057161322s
Feb 18 22:38:34.575: INFO: Pod "downwardapi-volume-7764b37a-7223-47b9-89a6-ad3b2bdf57e2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08563263s
Feb 18 22:38:36.585: INFO: Pod "downwardapi-volume-7764b37a-7223-47b9-89a6-ad3b2bdf57e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095263959s
STEP: Saw pod success
Feb 18 22:38:36.585: INFO: Pod "downwardapi-volume-7764b37a-7223-47b9-89a6-ad3b2bdf57e2" satisfied condition "success or failure"
Feb 18 22:38:36.589: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7764b37a-7223-47b9-89a6-ad3b2bdf57e2 container client-container: 
STEP: delete the pod
Feb 18 22:38:36.827: INFO: Waiting for pod downwardapi-volume-7764b37a-7223-47b9-89a6-ad3b2bdf57e2 to disappear
Feb 18 22:38:36.839: INFO: Pod downwardapi-volume-7764b37a-7223-47b9-89a6-ad3b2bdf57e2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:38:36.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4663" for this suite.

• [SLOW TEST:10.533 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3600,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
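The projected downward-API test above checks that `defaultMode` is applied to the files created in the volume. A volume definition of that shape can be sketched as follows (the mode value, volume name, and file path are assumptions for illustration):

```python
# Sketch of a projected volume with an explicit defaultMode; the test
# above verifies the mode is applied to the created files (values assumed).
projected_volume = {
    "name": "podinfo",  # assumed volume name
    "projected": {
        "defaultMode": 0o400,  # owner read-only; assumed mode
        "sources": [{
            "downwardAPI": {
                "items": [{
                    "path": "podname",  # assumed file path
                    "fieldRef": {"fieldPath": "metadata.name"},
                }],
            },
        }],
    },
}
```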
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:38:36.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 22:38:37.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}}, CollisionCount:(*int32)(nil)}
Feb 18 22:38:39.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:38:41.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:38:43.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662317, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 22:38:48.032: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:38:48.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3520" for this suite.
STEP: Destroying namespace "webhook-3520-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.163 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":208,"skipped":3613,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
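The discovery-document test above walks the API discovery endpoints step by step (the STEP lines list each fetch). That walk can be summarized as a sequence of (endpoint, expected name) pairs:

```python
# The discovery endpoints fetched by the test above, paired with the
# group/version/resource name each fetch is expected to contain.
discovery_checks = [
    ("/apis", "admissionregistration.k8s.io"),
    ("/apis", "admissionregistration.k8s.io/v1"),
    ("/apis/admissionregistration.k8s.io", "admissionregistration.k8s.io/v1"),
    ("/apis/admissionregistration.k8s.io/v1", "mutatingwebhookconfigurations"),
    ("/apis/admissionregistration.k8s.io/v1", "validatingwebhookconfigurations"),
]
```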
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:38:50.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:39:02.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7512" for this suite.

• [SLOW TEST:12.148 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3638,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:39:02.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-1118
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 18 22:39:02.337: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 18 22:39:42.514: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.2:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1118 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:39:42.514: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:39:42.603599       8 log.go:172] (0xc0023f5ad0) (0xc001646140) Create stream
I0218 22:39:42.603830       8 log.go:172] (0xc0023f5ad0) (0xc001646140) Stream added, broadcasting: 1
I0218 22:39:42.611796       8 log.go:172] (0xc0023f5ad0) Reply frame received for 1
I0218 22:39:42.611865       8 log.go:172] (0xc0023f5ad0) (0xc0012dc140) Create stream
I0218 22:39:42.611878       8 log.go:172] (0xc0023f5ad0) (0xc0012dc140) Stream added, broadcasting: 3
I0218 22:39:42.613545       8 log.go:172] (0xc0023f5ad0) Reply frame received for 3
I0218 22:39:42.613578       8 log.go:172] (0xc0023f5ad0) (0xc001cc1f40) Create stream
I0218 22:39:42.613590       8 log.go:172] (0xc0023f5ad0) (0xc001cc1f40) Stream added, broadcasting: 5
I0218 22:39:42.617791       8 log.go:172] (0xc0023f5ad0) Reply frame received for 5
I0218 22:39:42.726216       8 log.go:172] (0xc0023f5ad0) Data frame received for 3
I0218 22:39:42.726273       8 log.go:172] (0xc0012dc140) (3) Data frame handling
I0218 22:39:42.726300       8 log.go:172] (0xc0012dc140) (3) Data frame sent
I0218 22:39:42.788300       8 log.go:172] (0xc0023f5ad0) Data frame received for 1
I0218 22:39:42.788426       8 log.go:172] (0xc001646140) (1) Data frame handling
I0218 22:39:42.788456       8 log.go:172] (0xc001646140) (1) Data frame sent
I0218 22:39:42.788487       8 log.go:172] (0xc0023f5ad0) (0xc0012dc140) Stream removed, broadcasting: 3
I0218 22:39:42.788574       8 log.go:172] (0xc0023f5ad0) (0xc001cc1f40) Stream removed, broadcasting: 5
I0218 22:39:42.788600       8 log.go:172] (0xc0023f5ad0) (0xc001646140) Stream removed, broadcasting: 1
I0218 22:39:42.788816       8 log.go:172] (0xc0023f5ad0) Go away received
I0218 22:39:42.788855       8 log.go:172] (0xc0023f5ad0) (0xc001646140) Stream removed, broadcasting: 1
I0218 22:39:42.788863       8 log.go:172] (0xc0023f5ad0) (0xc0012dc140) Stream removed, broadcasting: 3
I0218 22:39:42.788870       8 log.go:172] (0xc0023f5ad0) (0xc001cc1f40) Stream removed, broadcasting: 5
Feb 18 22:39:42.788: INFO: Found all expected endpoints: [netserver-0]
Feb 18 22:39:42.796: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1118 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:39:42.796: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:39:42.849052       8 log.go:172] (0xc0011ae210) (0xc000a8cd20) Create stream
I0218 22:39:42.849482       8 log.go:172] (0xc0011ae210) (0xc000a8cd20) Stream added, broadcasting: 1
I0218 22:39:42.860781       8 log.go:172] (0xc0011ae210) Reply frame received for 1
I0218 22:39:42.861351       8 log.go:172] (0xc0011ae210) (0xc0005c6fa0) Create stream
I0218 22:39:42.861402       8 log.go:172] (0xc0011ae210) (0xc0005c6fa0) Stream added, broadcasting: 3
I0218 22:39:42.868583       8 log.go:172] (0xc0011ae210) Reply frame received for 3
I0218 22:39:42.868658       8 log.go:172] (0xc0011ae210) (0xc001159860) Create stream
I0218 22:39:42.868677       8 log.go:172] (0xc0011ae210) (0xc001159860) Stream added, broadcasting: 5
I0218 22:39:42.874304       8 log.go:172] (0xc0011ae210) Reply frame received for 5
I0218 22:39:42.952451       8 log.go:172] (0xc0011ae210) Data frame received for 3
I0218 22:39:42.952532       8 log.go:172] (0xc0005c6fa0) (3) Data frame handling
I0218 22:39:42.952574       8 log.go:172] (0xc0005c6fa0) (3) Data frame sent
I0218 22:39:43.021710       8 log.go:172] (0xc0011ae210) Data frame received for 1
I0218 22:39:43.021780       8 log.go:172] (0xc0011ae210) (0xc001159860) Stream removed, broadcasting: 5
I0218 22:39:43.021854       8 log.go:172] (0xc000a8cd20) (1) Data frame handling
I0218 22:39:43.021893       8 log.go:172] (0xc000a8cd20) (1) Data frame sent
I0218 22:39:43.021933       8 log.go:172] (0xc0011ae210) (0xc0005c6fa0) Stream removed, broadcasting: 3
I0218 22:39:43.021971       8 log.go:172] (0xc0011ae210) (0xc000a8cd20) Stream removed, broadcasting: 1
I0218 22:39:43.021998       8 log.go:172] (0xc0011ae210) Go away received
I0218 22:39:43.022213       8 log.go:172] (0xc0011ae210) (0xc000a8cd20) Stream removed, broadcasting: 1
I0218 22:39:43.022234       8 log.go:172] (0xc0011ae210) (0xc0005c6fa0) Stream removed, broadcasting: 3
I0218 22:39:43.022245       8 log.go:172] (0xc0011ae210) (0xc001159860) Stream removed, broadcasting: 5
Feb 18 22:39:43.022: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:39:43.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1118" for this suite.

• [SLOW TEST:40.832 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3655,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:39:43.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Feb 18 22:39:43.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3746'
Feb 18 22:39:46.276: INFO: stderr: ""
Feb 18 22:39:46.276: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 22:39:46.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3746'
Feb 18 22:39:46.419: INFO: stderr: ""
Feb 18 22:39:46.419: INFO: stdout: "update-demo-nautilus-95mz4 update-demo-nautilus-cvbmp "
Feb 18 22:39:46.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95mz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3746'
Feb 18 22:39:46.575: INFO: stderr: ""
Feb 18 22:39:46.575: INFO: stdout: ""
Feb 18 22:39:46.575: INFO: update-demo-nautilus-95mz4 is created but not running
Feb 18 22:39:51.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3746'
Feb 18 22:39:51.780: INFO: stderr: ""
Feb 18 22:39:51.780: INFO: stdout: "update-demo-nautilus-95mz4 update-demo-nautilus-cvbmp "
Feb 18 22:39:51.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95mz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3746'
Feb 18 22:39:52.268: INFO: stderr: ""
Feb 18 22:39:52.268: INFO: stdout: ""
Feb 18 22:39:52.268: INFO: update-demo-nautilus-95mz4 is created but not running
Feb 18 22:39:57.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3746'
Feb 18 22:39:58.094: INFO: stderr: ""
Feb 18 22:39:58.094: INFO: stdout: "update-demo-nautilus-95mz4 update-demo-nautilus-cvbmp "
Feb 18 22:39:58.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95mz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3746'
Feb 18 22:39:58.245: INFO: stderr: ""
Feb 18 22:39:58.246: INFO: stdout: ""
Feb 18 22:39:58.246: INFO: update-demo-nautilus-95mz4 is created but not running
Feb 18 22:40:03.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3746'
Feb 18 22:40:03.404: INFO: stderr: ""
Feb 18 22:40:03.404: INFO: stdout: "update-demo-nautilus-95mz4 update-demo-nautilus-cvbmp "
Feb 18 22:40:03.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95mz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3746'
Feb 18 22:40:03.575: INFO: stderr: ""
Feb 18 22:40:03.575: INFO: stdout: "true"
Feb 18 22:40:03.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95mz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3746'
Feb 18 22:40:03.666: INFO: stderr: ""
Feb 18 22:40:03.666: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 22:40:03.666: INFO: validating pod update-demo-nautilus-95mz4
Feb 18 22:40:03.692: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 22:40:03.692: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 18 22:40:03.692: INFO: update-demo-nautilus-95mz4 is verified up and running
Feb 18 22:40:03.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cvbmp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3746'
Feb 18 22:40:03.809: INFO: stderr: ""
Feb 18 22:40:03.809: INFO: stdout: "true"
Feb 18 22:40:03.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cvbmp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3746'
Feb 18 22:40:03.943: INFO: stderr: ""
Feb 18 22:40:03.943: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 22:40:03.943: INFO: validating pod update-demo-nautilus-cvbmp
Feb 18 22:40:03.956: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 22:40:03.956: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 18 22:40:03.956: INFO: update-demo-nautilus-cvbmp is verified up and running
STEP: rolling-update to new replication controller
Feb 18 22:40:03.960: INFO: scanned /root for discovery docs: 
Feb 18 22:40:03.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3746'
Feb 18 22:40:35.636: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 18 22:40:35.636: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 22:40:35.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3746'
Feb 18 22:40:35.787: INFO: stderr: ""
Feb 18 22:40:35.787: INFO: stdout: "update-demo-kitten-6czqg update-demo-kitten-7kzt7 "
Feb 18 22:40:35.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6czqg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3746'
Feb 18 22:40:35.929: INFO: stderr: ""
Feb 18 22:40:35.929: INFO: stdout: "true"
Feb 18 22:40:35.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6czqg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3746'
Feb 18 22:40:36.072: INFO: stderr: ""
Feb 18 22:40:36.072: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 18 22:40:36.072: INFO: validating pod update-demo-kitten-6czqg
Feb 18 22:40:36.086: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 18 22:40:36.086: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 18 22:40:36.086: INFO: update-demo-kitten-6czqg is verified up and running
Feb 18 22:40:36.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7kzt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3746'
Feb 18 22:40:36.197: INFO: stderr: ""
Feb 18 22:40:36.197: INFO: stdout: "true"
Feb 18 22:40:36.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7kzt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3746'
Feb 18 22:40:36.341: INFO: stderr: ""
Feb 18 22:40:36.341: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 18 22:40:36.341: INFO: validating pod update-demo-kitten-7kzt7
Feb 18 22:40:36.348: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 18 22:40:36.348: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 18 22:40:36.348: INFO: update-demo-kitten-7kzt7 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:40:36.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3746" for this suite.

• [SLOW TEST:53.323 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":211,"skipped":3670,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:40:36.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb 18 22:40:48.481: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 18 22:41:03.594: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:41:03.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2075" for this suite.

• [SLOW TEST:27.267 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":212,"skipped":3672,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:41:03.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:41:21.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3496" for this suite.

• [SLOW TEST:17.727 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":213,"skipped":3690,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:41:21.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Feb 18 22:41:21.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:41:39.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8572" for this suite.

• [SLOW TEST:18.655 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":214,"skipped":3702,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:41:40.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-adf3b050-a486-416c-ac53-c29942912221 in namespace container-probe-725
Feb 18 22:41:48.130: INFO: Started pod busybox-adf3b050-a486-416c-ac53-c29942912221 in namespace container-probe-725
STEP: checking the pod's current state and verifying that restartCount is present
Feb 18 22:41:48.138: INFO: Initial restart count of pod busybox-adf3b050-a486-416c-ac53-c29942912221 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:45:49.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-725" for this suite.

• [SLOW TEST:249.567 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3709,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:45:49.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 18 22:45:49.730: INFO: Number of nodes with available pods: 0
Feb 18 22:45:49.730: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:45:50.741: INFO: Number of nodes with available pods: 0
Feb 18 22:45:50.741: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:45:51.760: INFO: Number of nodes with available pods: 0
Feb 18 22:45:51.760: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:45:52.878: INFO: Number of nodes with available pods: 0
Feb 18 22:45:52.878: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:45:53.749: INFO: Number of nodes with available pods: 0
Feb 18 22:45:53.749: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:45:54.760: INFO: Number of nodes with available pods: 0
Feb 18 22:45:54.760: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:45:56.969: INFO: Number of nodes with available pods: 0
Feb 18 22:45:56.969: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:45:57.769: INFO: Number of nodes with available pods: 0
Feb 18 22:45:57.769: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:45:59.761: INFO: Number of nodes with available pods: 0
Feb 18 22:45:59.761: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:46:00.744: INFO: Number of nodes with available pods: 0
Feb 18 22:46:00.744: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:46:01.748: INFO: Number of nodes with available pods: 2
Feb 18 22:46:01.748: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 18 22:46:01.901: INFO: Number of nodes with available pods: 1
Feb 18 22:46:01.902: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:46:02.914: INFO: Number of nodes with available pods: 1
Feb 18 22:46:02.914: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:46:03.930: INFO: Number of nodes with available pods: 1
Feb 18 22:46:03.930: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:46:04.920: INFO: Number of nodes with available pods: 1
Feb 18 22:46:04.920: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:46:05.921: INFO: Number of nodes with available pods: 1
Feb 18 22:46:05.921: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:46:06.927: INFO: Number of nodes with available pods: 1
Feb 18 22:46:06.927: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:46:07.917: INFO: Number of nodes with available pods: 1
Feb 18 22:46:07.917: INFO: Node jerma-node is running more than one daemon pod
Feb 18 22:46:08.920: INFO: Number of nodes with available pods: 2
Feb 18 22:46:08.920: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6848, will wait for the garbage collector to delete the pods
Feb 18 22:46:08.992: INFO: Deleting DaemonSet.extensions daemon-set took: 7.207756ms
Feb 18 22:46:09.292: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.618168ms
Feb 18 22:46:23.097: INFO: Number of nodes with available pods: 0
Feb 18 22:46:23.097: INFO: Number of running nodes: 0, number of available pods: 0
Feb 18 22:46:23.100: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6848/daemonsets","resourceVersion":"9283767"},"items":null}

Feb 18 22:46:23.102: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6848/pods","resourceVersion":"9283767"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:46:23.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6848" for this suite.

• [SLOW TEST:33.600 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":216,"skipped":3717,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
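(Editor's note: for reference, a DaemonSet of the kind this test creates — one pod per node, with failed pods retried — might look like the following minimal sketch. The metadata name matches the log; the image and label are illustrative assumptions, not taken from the test source.)

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set          # name as seen in the log above
spec:
  selector:
    matchLabels:
      app: daemon-set       # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9   # illustrative image
```

The DaemonSet controller guarantees one running pod per eligible node, which is why the test can force a pod's phase to `Failed` and then watch the controller recreate ("revive") it.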
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:46:23.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0218 22:46:34.433833       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 22:46:34.433: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:46:34.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4414" for this suite.

• [SLOW TEST:11.267 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":217,"skipped":3733,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
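(Editor's note: the "set half of pods ... to have rc simpletest-rc-to-stay as owner as well" step relies on pods carrying two entries in `metadata.ownerReferences`. A sketch of such metadata, with placeholder UIDs, might look like this; the pod name is illustrative.)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod            # illustrative name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: "<rc1-uid>"              # placeholder
    blockOwnerDeletion: true
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: "<rc2-uid>"              # placeholder
```

Because a valid owner (`simpletest-rc-to-stay`) remains after the first RC is deleted, the garbage collector must not delete these pods — which is exactly what the test asserts.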
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:46:34.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Feb 18 22:46:39.836: INFO: mount-test service account has no secret references
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 18 22:47:03.639: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-953 pod-service-account-63f8a3a2-0629-43f3-a1cd-059446da05fc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 18 22:47:04.147: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-953 pod-service-account-63f8a3a2-0629-43f3-a1cd-059446da05fc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 18 22:47:04.483: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-953 pod-service-account-63f8a3a2-0629-43f3-a1cd-059446da05fc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:47:04.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-953" for this suite.

• [SLOW TEST:30.409 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":218,"skipped":3777,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
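(Editor's note: the three `kubectl exec ... cat` commands above read the standard projected service-account files. A minimal pod of the sort the test runs — names and image are illustrative — could be sketched as:)

```yaml
# The kubelet mounts the service account credentials at:
#   /var/run/secrets/kubernetes.io/serviceaccount/{token,ca.crt,namespace}
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account       # illustrative name
spec:
  serviceAccountName: mount-test  # service account named in the log
  containers:
  - name: test
    image: busybox                # illustrative image
    command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/token"]
```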
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:47:04.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:47:18.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5988" for this suite.

• [SLOW TEST:13.571 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":219,"skipped":3778,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
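(Editor's note: a ResourceQuota with hard limits of the kind this test counts against might be sketched as follows; the name and limit values are illustrative assumptions.)

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota                # illustrative name
spec:
  hard:
    pods: "1"                     # a second pod would be rejected at admission
    requests.cpu: "500m"
    requests.memory: 512Mi
```

With such a quota in place, the API server rejects pod creations (and resource-requirement updates) that would exceed the remaining quota, matching the "Not allowing a pod to be created" steps above.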
SSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:47:18.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:47:18.612: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b9eabacd-9222-4b73-ae18-ff1db34dcd16" in namespace "security-context-test-910" to be "success or failure"
Feb 18 22:47:18.628: INFO: Pod "busybox-readonly-false-b9eabacd-9222-4b73-ae18-ff1db34dcd16": Phase="Pending", Reason="", readiness=false. Elapsed: 15.66408ms
Feb 18 22:47:20.639: INFO: Pod "busybox-readonly-false-b9eabacd-9222-4b73-ae18-ff1db34dcd16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027128876s
Feb 18 22:47:22.750: INFO: Pod "busybox-readonly-false-b9eabacd-9222-4b73-ae18-ff1db34dcd16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137530788s
Feb 18 22:47:24.810: INFO: Pod "busybox-readonly-false-b9eabacd-9222-4b73-ae18-ff1db34dcd16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.197527466s
Feb 18 22:47:26.846: INFO: Pod "busybox-readonly-false-b9eabacd-9222-4b73-ae18-ff1db34dcd16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.233205318s
Feb 18 22:47:26.846: INFO: Pod "busybox-readonly-false-b9eabacd-9222-4b73-ae18-ff1db34dcd16" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:47:26.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-910" for this suite.

• [SLOW TEST:8.453 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3782,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
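(Editor's note: the pod name in the log suggests a busybox container with `readOnlyRootFilesystem: false`; a minimal sketch, with an illustrative command, might be:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false    # matches the naming in the log
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/ok"]   # illustrative write; succeeds because rootfs is writable
    securityContext:
      readOnlyRootFilesystem: false
```

The pod reaching `Succeeded` (as in the log) demonstrates the write to the root filesystem was allowed.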
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:47:26.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-4de82a4e-c36e-45c4-8c5e-9a058ca084b7
STEP: Creating a pod to test consume configMaps
Feb 18 22:47:27.068: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a59b6b8-a066-4af8-8e53-b56a7ec75e6c" in namespace "configmap-2017" to be "success or failure"
Feb 18 22:47:27.111: INFO: Pod "pod-configmaps-1a59b6b8-a066-4af8-8e53-b56a7ec75e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 42.920884ms
Feb 18 22:47:29.119: INFO: Pod "pod-configmaps-1a59b6b8-a066-4af8-8e53-b56a7ec75e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050827834s
Feb 18 22:47:31.124: INFO: Pod "pod-configmaps-1a59b6b8-a066-4af8-8e53-b56a7ec75e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055775221s
Feb 18 22:47:33.128: INFO: Pod "pod-configmaps-1a59b6b8-a066-4af8-8e53-b56a7ec75e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060257722s
Feb 18 22:47:35.196: INFO: Pod "pod-configmaps-1a59b6b8-a066-4af8-8e53-b56a7ec75e6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.127816676s
STEP: Saw pod success
Feb 18 22:47:35.196: INFO: Pod "pod-configmaps-1a59b6b8-a066-4af8-8e53-b56a7ec75e6c" satisfied condition "success or failure"
Feb 18 22:47:35.200: INFO: Trying to get logs from node jerma-node pod pod-configmaps-1a59b6b8-a066-4af8-8e53-b56a7ec75e6c container configmap-volume-test: 
STEP: delete the pod
Feb 18 22:47:35.255: INFO: Waiting for pod pod-configmaps-1a59b6b8-a066-4af8-8e53-b56a7ec75e6c to disappear
Feb 18 22:47:35.273: INFO: Pod pod-configmaps-1a59b6b8-a066-4af8-8e53-b56a7ec75e6c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:47:35.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2017" for this suite.

• [SLOW TEST:8.407 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3783,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
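(Editor's note: "with mappings as non-root" refers to a ConfigMap volume whose `items` remap keys to custom paths, consumed by a pod running with a non-root UID. A sketch under those assumptions — key names, paths, and UID are illustrative:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps            # matches the naming in the log
spec:
  securityContext:
    runAsUser: 1000               # non-root, illustrative UID
  containers:
  - name: configmap-volume-test   # container name from the log
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data"]   # illustrative path
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # name prefix from the log
      items:
      - key: data                 # illustrative key
        path: path/to/data        # remapped path ("mapping")
```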
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:47:35.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:47:35.405: INFO: Creating deployment "test-recreate-deployment"
Feb 18 22:47:35.423: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 18 22:47:35.440: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 18 22:47:37.453: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 18 22:47:37.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:47:39.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:47:41.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717662855, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:47:43.463: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 18 22:47:43.471: INFO: Updating deployment test-recreate-deployment
Feb 18 22:47:43.471: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb 18 22:47:43.720: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-4613 /apis/apps/v1/namespaces/deployment-4613/deployments/test-recreate-deployment 32a88d97-152d-4f8b-a3c6-f3b9fecd0b79 9284228 2 2020-02-18 22:47:35 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025641d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-18 22:47:43 +0000 UTC,LastTransitionTime:2020-02-18 22:47:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-18 22:47:43 +0000 UTC,LastTransitionTime:2020-02-18 22:47:35 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Feb 18 22:47:43.725: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-4613 /apis/apps/v1/namespaces/deployment-4613/replicasets/test-recreate-deployment-5f94c574ff 51d3836c-6daa-420c-b0a8-e2f151437da8 9284226 1 2020-02-18 22:47:43 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 32a88d97-152d-4f8b-a3c6-f3b9fecd0b79 0xc004ff2657 0xc004ff2658}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ff26b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 18 22:47:43.725: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 18 22:47:43.725: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-4613 /apis/apps/v1/namespaces/deployment-4613/replicasets/test-recreate-deployment-799c574856 ff034a74-7356-48b9-9ea3-f323ee0f017e 9284217 2 2020-02-18 22:47:35 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 32a88d97-152d-4f8b-a3c6-f3b9fecd0b79 0xc004ff2727 0xc004ff2728}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ff2798  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 18 22:47:43.816: INFO: Pod "test-recreate-deployment-5f94c574ff-9gzx9" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-9gzx9 test-recreate-deployment-5f94c574ff- deployment-4613 /api/v1/namespaces/deployment-4613/pods/test-recreate-deployment-5f94c574ff-9gzx9 25fd0b78-dcc9-4040-98a9-fec3ca50df08 9284229 0 2020-02-18 22:47:43 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 51d3836c-6daa-420c-b0a8-e2f151437da8 0xc002430917 0xc002430918}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg5tk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg5tk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg5tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:47:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:47:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:47:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:47:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 22:47:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:47:43.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4613" for this suite.

• [SLOW TEST:8.552 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":222,"skipped":3785,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
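The RecreateDeployment behavior exercised above (all old pods are deleted before new ones appear; note the new pod is still Pending/ContainerCreating when dumped) corresponds to a Deployment whose `strategy.type` is `Recreate`. A minimal illustrative manifest, reusing the names and image seen in the log rather than the e2e framework's exact object:

```yaml
# Illustrative sketch, not the test's literal spec: with strategy Recreate,
# the Deployment controller scales the old ReplicaSet to 0 before creating
# pods for the new one, so old and new pods never run at the same time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate          # no RollingUpdate parameters apply here
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```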
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:47:43.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:47:44.050: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 18 22:47:49.059: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 18 22:47:57.076: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb 18 22:47:57.110: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-4716 /apis/apps/v1/namespaces/deployment-4716/deployments/test-cleanup-deployment 18550d19-ef60-44f4-b27b-2e9d769f2d52 9284299 1 2020-02-18 22:47:57 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00506cec8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Feb 18 22:47:57.150: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-4716 /apis/apps/v1/namespaces/deployment-4716/replicasets/test-cleanup-deployment-55ffc6b7b6 fe98503c-b598-43c2-9d1e-5f34ade938d2 9284301 1 2020-02-18 22:47:57 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 18550d19-ef60-44f4-b27b-2e9d769f2d52 0xc003c24397 0xc003c24398}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c24408  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 18 22:47:57.150: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 18 22:47:57.150: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-4716 /apis/apps/v1/namespaces/deployment-4716/replicasets/test-cleanup-controller d5b22843-dae5-4f77-b7ae-60d29a4b5660 9284300 1 2020-02-18 22:47:44 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 18550d19-ef60-44f4-b27b-2e9d769f2d52 0xc003c242c7 0xc003c242c8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003c24328  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 18 22:47:57.249: INFO: Pod "test-cleanup-controller-v5flf" is available:
&Pod{ObjectMeta:{test-cleanup-controller-v5flf test-cleanup-controller- deployment-4716 /api/v1/namespaces/deployment-4716/pods/test-cleanup-controller-v5flf d6bc2ee9-7d56-419a-98aa-6f828fbb1920 9284293 0 2020-02-18 22:47:44 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller d5b22843-dae5-4f77-b7ae-60d29a4b5660 0xc00506d307 0xc00506d308}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-scdz4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-scdz4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-scdz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:47:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:47:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:47:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-18 22:47:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 22:47:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://19184d58668172181b2645900d9cae202df89824b61bd704cc4c4e88d6b2dfec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 22:47:57.249: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-4pbd8" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-4pbd8 test-cleanup-deployment-55ffc6b7b6- deployment-4716 /api/v1/namespaces/deployment-4716/pods/test-cleanup-deployment-55ffc6b7b6-4pbd8 426f8793-e303-49b7-9497-32e97eb388e8 9284307 0 2020-02-18 22:47:57 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 fe98503c-b598-43c2-9d1e-5f34ade938d2 0xc00506d487 0xc00506d488}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-scdz4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-scdz4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-scdz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 22:47:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:47:57.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4716" for this suite.

• [SLOW TEST:13.485 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":223,"skipped":3793,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
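The history cleanup being waited on above comes from `RevisionHistoryLimit:*0`, visible in the dumped Deployment spec: with `revisionHistoryLimit: 0`, the Deployment controller garbage-collects old ReplicaSets as soon as they are scaled down. An illustrative sketch reusing the log's names and image (not the test's literal object):

```yaml
# Illustrative: revisionHistoryLimit: 0 keeps no rollout history, so the
# adopted test-cleanup-controller ReplicaSet is deleted once superseded.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0   # default is 10; 0 deletes all old ReplicaSets
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
```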
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:47:57.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:48:13.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3932" for this suite.

• [SLOW TEST:16.503 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":224,"skipped":3794,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
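The two quotas in the test above differ only in their scope selector: a `BestEffort`-scoped quota counts only pods with no resource requests or limits, while a `NotBestEffort`-scoped quota counts everything else, which is why each quota captures one pod and ignores the other. Illustrative manifests (the names and the `pods` limit are made up; the scope values are the real API constants):

```yaml
# Illustrative pair of ResourceQuotas mirroring the test's two scopes.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-besteffort
spec:
  hard:
    pods: "5"
  scopes: ["BestEffort"]      # matches only pods with no requests/limits
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-besteffort
spec:
  hard:
    pods: "5"
  scopes: ["NotBestEffort"]   # matches only pods that set requests/limits
```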
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:48:13.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 22:48:14.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e49f4c16-330e-4a9d-8785-b80ed92dcd64" in namespace "downward-api-5689" to be "success or failure"
Feb 18 22:48:14.116: INFO: Pod "downwardapi-volume-e49f4c16-330e-4a9d-8785-b80ed92dcd64": Phase="Pending", Reason="", readiness=false. Elapsed: 67.788983ms
Feb 18 22:48:16.122: INFO: Pod "downwardapi-volume-e49f4c16-330e-4a9d-8785-b80ed92dcd64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073318492s
Feb 18 22:48:18.127: INFO: Pod "downwardapi-volume-e49f4c16-330e-4a9d-8785-b80ed92dcd64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078942279s
Feb 18 22:48:20.953: INFO: Pod "downwardapi-volume-e49f4c16-330e-4a9d-8785-b80ed92dcd64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.905014731s
Feb 18 22:48:22.959: INFO: Pod "downwardapi-volume-e49f4c16-330e-4a9d-8785-b80ed92dcd64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.910903727s
Feb 18 22:48:24.964: INFO: Pod "downwardapi-volume-e49f4c16-330e-4a9d-8785-b80ed92dcd64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.915330391s
STEP: Saw pod success
Feb 18 22:48:24.964: INFO: Pod "downwardapi-volume-e49f4c16-330e-4a9d-8785-b80ed92dcd64" satisfied condition "success or failure"
Feb 18 22:48:24.966: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e49f4c16-330e-4a9d-8785-b80ed92dcd64 container client-container: 
STEP: delete the pod
Feb 18 22:48:25.020: INFO: Waiting for pod downwardapi-volume-e49f4c16-330e-4a9d-8785-b80ed92dcd64 to disappear
Feb 18 22:48:25.025: INFO: Pod downwardapi-volume-e49f4c16-330e-4a9d-8785-b80ed92dcd64 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:48:25.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5689" for this suite.

• [SLOW TEST:11.205 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3835,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
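The "mode on item file" being verified is the per-item `mode` field of a downwardAPI volume, which sets the permission bits of the projected file. An illustrative pod sketch (the names, image, command, and the 0400 mode are assumptions for illustration, not the test's exact values):

```yaml
# Illustrative: the downwardAPI item below is written as /etc/podinfo/podname
# with mode 0400; a test like this one reads the mode back and compares it.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400            # per-item file mode (octal)
```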
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:48:25.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-x2nj
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 22:48:25.347: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-x2nj" in namespace "subpath-6474" to be "success or failure"
Feb 18 22:48:25.356: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.523801ms
Feb 18 22:48:27.376: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029086279s
Feb 18 22:48:29.382: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03544369s
Feb 18 22:48:31.393: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046273231s
Feb 18 22:48:33.401: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Running", Reason="", readiness=true. Elapsed: 8.053979813s
Feb 18 22:48:35.408: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Running", Reason="", readiness=true. Elapsed: 10.061076319s
Feb 18 22:48:37.415: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Running", Reason="", readiness=true. Elapsed: 12.067939831s
Feb 18 22:48:39.424: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Running", Reason="", readiness=true. Elapsed: 14.076742027s
Feb 18 22:48:41.434: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Running", Reason="", readiness=true. Elapsed: 16.087282561s
Feb 18 22:48:43.441: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Running", Reason="", readiness=true. Elapsed: 18.094293048s
Feb 18 22:48:45.450: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Running", Reason="", readiness=true. Elapsed: 20.103248629s
Feb 18 22:48:47.460: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Running", Reason="", readiness=true. Elapsed: 22.112565435s
Feb 18 22:48:49.472: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Running", Reason="", readiness=true. Elapsed: 24.124640776s
Feb 18 22:48:51.480: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Running", Reason="", readiness=true. Elapsed: 26.132806247s
Feb 18 22:48:53.495: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Running", Reason="", readiness=true. Elapsed: 28.147518614s
Feb 18 22:48:55.501: INFO: Pod "pod-subpath-test-projected-x2nj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.15412503s
STEP: Saw pod success
Feb 18 22:48:55.501: INFO: Pod "pod-subpath-test-projected-x2nj" satisfied condition "success or failure"
Feb 18 22:48:55.505: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-x2nj container test-container-subpath-projected-x2nj: 
STEP: delete the pod
Feb 18 22:48:55.570: INFO: Waiting for pod pod-subpath-test-projected-x2nj to disappear
Feb 18 22:48:55.618: INFO: Pod pod-subpath-test-projected-x2nj no longer exists
STEP: Deleting pod pod-subpath-test-projected-x2nj
Feb 18 22:48:55.618: INFO: Deleting pod "pod-subpath-test-projected-x2nj" in namespace "subpath-6474"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:48:55.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6474" for this suite.

• [SLOW TEST:30.595 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":226,"skipped":3842,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
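The atomic-writer subpath pattern exercised here mounts a single entry of a projected volume into the container via `subPath`; atomic writers (configMap, secret, downwardAPI, projected) update files through symlink swaps, which is what makes the subPath case worth testing. An illustrative sketch (the ConfigMap name and key are hypothetical):

```yaml
# Illustrative: mount one file out of a projected volume using subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-projected-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /test-volume/projected-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/projected-file
      subPath: projected-file   # mount a single entry, not the whole volume
  volumes:
  - name: test-volume
    projected:
      sources:
      - configMap:
          name: my-configmap    # hypothetical ConfigMap with key "projected-file"
```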
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:48:55.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Feb 18 22:48:55.864: INFO: namespace kubectl-940
Feb 18 22:48:55.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-940'
Feb 18 22:48:56.444: INFO: stderr: ""
Feb 18 22:48:56.444: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 18 22:48:57.452: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:48:57.452: INFO: Found 0 / 1
Feb 18 22:48:58.460: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:48:58.460: INFO: Found 0 / 1
Feb 18 22:48:59.463: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:48:59.463: INFO: Found 0 / 1
Feb 18 22:49:00.456: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:49:00.456: INFO: Found 0 / 1
Feb 18 22:49:01.458: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:49:01.458: INFO: Found 0 / 1
Feb 18 22:49:02.450: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:49:02.450: INFO: Found 0 / 1
Feb 18 22:49:03.453: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:49:03.453: INFO: Found 0 / 1
Feb 18 22:49:04.455: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:49:04.456: INFO: Found 1 / 1
Feb 18 22:49:04.456: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 18 22:49:04.460: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:49:04.460: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 18 22:49:04.460: INFO: wait on agnhost-master startup in kubectl-940 
Feb 18 22:49:04.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-fbw2p agnhost-master --namespace=kubectl-940'
Feb 18 22:49:04.628: INFO: stderr: ""
Feb 18 22:49:04.629: INFO: stdout: "Paused\n"
STEP: exposing RC
Feb 18 22:49:04.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-940'
Feb 18 22:49:04.797: INFO: stderr: ""
Feb 18 22:49:04.798: INFO: stdout: "service/rm2 exposed\n"
Feb 18 22:49:04.816: INFO: Service rm2 in namespace kubectl-940 found.
STEP: exposing service
Feb 18 22:49:06.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-940'
Feb 18 22:49:07.019: INFO: stderr: ""
Feb 18 22:49:07.019: INFO: stdout: "service/rm3 exposed\n"
Feb 18 22:49:07.025: INFO: Service rm3 in namespace kubectl-940 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:49:09.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-940" for this suite.

• [SLOW TEST:13.429 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":227,"skipped":3880,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:49:09.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0218 22:49:12.309279       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 22:49:12.309: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:49:12.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1453" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":228,"skipped":3883,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:49:12.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:50:05.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6466" for this suite.

• [SLOW TEST:52.956 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3897,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:50:05.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-rhpv
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 22:50:05.533: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rhpv" in namespace "subpath-4789" to be "success or failure"
Feb 18 22:50:05.545: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.141575ms
Feb 18 22:50:07.552: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019031506s
Feb 18 22:50:09.608: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075296681s
Feb 18 22:50:11.615: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082842394s
Feb 18 22:50:13.625: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Running", Reason="", readiness=true. Elapsed: 8.092393003s
Feb 18 22:50:15.633: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Running", Reason="", readiness=true. Elapsed: 10.099936771s
Feb 18 22:50:17.639: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Running", Reason="", readiness=true. Elapsed: 12.106516382s
Feb 18 22:50:19.647: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Running", Reason="", readiness=true. Elapsed: 14.114606664s
Feb 18 22:50:22.051: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Running", Reason="", readiness=true. Elapsed: 16.518241269s
Feb 18 22:50:24.058: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Running", Reason="", readiness=true. Elapsed: 18.525779564s
Feb 18 22:50:27.394: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Running", Reason="", readiness=true. Elapsed: 21.861322622s
Feb 18 22:50:29.402: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Running", Reason="", readiness=true. Elapsed: 23.869334297s
Feb 18 22:50:31.410: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Running", Reason="", readiness=true. Elapsed: 25.87736375s
Feb 18 22:50:33.417: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Running", Reason="", readiness=true. Elapsed: 27.884079084s
Feb 18 22:50:35.424: INFO: Pod "pod-subpath-test-downwardapi-rhpv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.891561077s
STEP: Saw pod success
Feb 18 22:50:35.424: INFO: Pod "pod-subpath-test-downwardapi-rhpv" satisfied condition "success or failure"
Feb 18 22:50:35.431: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-rhpv container test-container-subpath-downwardapi-rhpv: 
STEP: delete the pod
Feb 18 22:50:35.515: INFO: Waiting for pod pod-subpath-test-downwardapi-rhpv to disappear
Feb 18 22:50:35.520: INFO: Pod pod-subpath-test-downwardapi-rhpv no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rhpv
Feb 18 22:50:35.520: INFO: Deleting pod "pod-subpath-test-downwardapi-rhpv" in namespace "subpath-4789"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:50:35.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4789" for this suite.

• [SLOW TEST:30.311 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":230,"skipped":3903,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:50:35.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 22:50:36.674: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Feb 18 22:50:38.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:50:40.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:50:42.699: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663036, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 22:50:45.740: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:50:45.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6754" for this suite.
STEP: Destroying namespace "webhook-6754-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.467 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":231,"skipped":3906,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:50:46.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-18d4ce25-91b3-4d40-b337-873e88e4072f
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-18d4ce25-91b3-4d40-b337-873e88e4072f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:52:04.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-707" for this suite.

• [SLOW TEST:78.146 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3919,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:52:04.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:52:04.328: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e65d52ab-726a-47cb-b2e4-2d037133062c" in namespace "security-context-test-5640" to be "success or failure"
Feb 18 22:52:04.347: INFO: Pod "busybox-user-65534-e65d52ab-726a-47cb-b2e4-2d037133062c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.122391ms
Feb 18 22:52:07.662: INFO: Pod "busybox-user-65534-e65d52ab-726a-47cb-b2e4-2d037133062c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.333766221s
Feb 18 22:52:09.683: INFO: Pod "busybox-user-65534-e65d52ab-726a-47cb-b2e4-2d037133062c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.354528943s
Feb 18 22:52:11.690: INFO: Pod "busybox-user-65534-e65d52ab-726a-47cb-b2e4-2d037133062c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.361259191s
Feb 18 22:52:13.703: INFO: Pod "busybox-user-65534-e65d52ab-726a-47cb-b2e4-2d037133062c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.374838909s
Feb 18 22:52:15.711: INFO: Pod "busybox-user-65534-e65d52ab-726a-47cb-b2e4-2d037133062c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.382736606s
Feb 18 22:52:15.711: INFO: Pod "busybox-user-65534-e65d52ab-726a-47cb-b2e4-2d037133062c" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:52:15.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5640" for this suite.

• [SLOW TEST:11.533 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3945,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:52:15.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-c8aa3c71-dfcf-4421-9a21-be6932a1fecb
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:52:15.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4040" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":234,"skipped":3945,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:52:15.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 18 22:52:23.469: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:52:23.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5324" for this suite.

• [SLOW TEST:7.565 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3964,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:52:23.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 18 22:52:23.645: INFO: Waiting up to 5m0s for pod "pod-7543873c-05f4-4345-8505-fdc4b24a6b1e" in namespace "emptydir-6193" to be "success or failure"
Feb 18 22:52:23.649: INFO: Pod "pod-7543873c-05f4-4345-8505-fdc4b24a6b1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.599121ms
Feb 18 22:52:25.656: INFO: Pod "pod-7543873c-05f4-4345-8505-fdc4b24a6b1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011577641s
Feb 18 22:52:27.666: INFO: Pod "pod-7543873c-05f4-4345-8505-fdc4b24a6b1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020794933s
Feb 18 22:52:29.672: INFO: Pod "pod-7543873c-05f4-4345-8505-fdc4b24a6b1e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02766081s
Feb 18 22:52:31.680: INFO: Pod "pod-7543873c-05f4-4345-8505-fdc4b24a6b1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034957915s
STEP: Saw pod success
Feb 18 22:52:31.680: INFO: Pod "pod-7543873c-05f4-4345-8505-fdc4b24a6b1e" satisfied condition "success or failure"
Feb 18 22:52:31.684: INFO: Trying to get logs from node jerma-node pod pod-7543873c-05f4-4345-8505-fdc4b24a6b1e container test-container: 
STEP: delete the pod
Feb 18 22:52:31.756: INFO: Waiting for pod pod-7543873c-05f4-4345-8505-fdc4b24a6b1e to disappear
Feb 18 22:52:31.762: INFO: Pod pod-7543873c-05f4-4345-8505-fdc4b24a6b1e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:52:31.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6193" for this suite.

• [SLOW TEST:8.224 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3983,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:52:31.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-bdd77fd9-9257-4fe1-a8b3-f23f28e1505a
STEP: Creating a pod to test consume secrets
Feb 18 22:52:31.937: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-39f47faa-9108-4c69-9298-dcfde2dae759" in namespace "projected-6622" to be "success or failure"
Feb 18 22:52:31.942: INFO: Pod "pod-projected-secrets-39f47faa-9108-4c69-9298-dcfde2dae759": Phase="Pending", Reason="", readiness=false. Elapsed: 4.450107ms
Feb 18 22:52:33.951: INFO: Pod "pod-projected-secrets-39f47faa-9108-4c69-9298-dcfde2dae759": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013542365s
Feb 18 22:52:35.956: INFO: Pod "pod-projected-secrets-39f47faa-9108-4c69-9298-dcfde2dae759": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018560742s
Feb 18 22:52:37.963: INFO: Pod "pod-projected-secrets-39f47faa-9108-4c69-9298-dcfde2dae759": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025329284s
Feb 18 22:52:39.974: INFO: Pod "pod-projected-secrets-39f47faa-9108-4c69-9298-dcfde2dae759": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036646692s
STEP: Saw pod success
Feb 18 22:52:39.974: INFO: Pod "pod-projected-secrets-39f47faa-9108-4c69-9298-dcfde2dae759" satisfied condition "success or failure"
Feb 18 22:52:39.977: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-39f47faa-9108-4c69-9298-dcfde2dae759 container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 22:52:40.055: INFO: Waiting for pod pod-projected-secrets-39f47faa-9108-4c69-9298-dcfde2dae759 to disappear
Feb 18 22:52:40.061: INFO: Pod pod-projected-secrets-39f47faa-9108-4c69-9298-dcfde2dae759 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:52:40.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6622" for this suite.

• [SLOW TEST:8.297 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3987,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:52:40.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 18 22:52:41.193: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 18 22:52:43.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:52:45.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:52:47.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 22:52:49.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717663161, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 22:52:52.333: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:52:52.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:52:53.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3273" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:13.999 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":238,"skipped":3987,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:52:54.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-xm2j
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 22:52:54.191: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xm2j" in namespace "subpath-6082" to be "success or failure"
Feb 18 22:52:54.202: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Pending", Reason="", readiness=false. Elapsed: 11.821307ms
Feb 18 22:52:56.210: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019250586s
Feb 18 22:52:58.228: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036945112s
Feb 18 22:53:00.271: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080528407s
Feb 18 22:53:02.313: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122230467s
Feb 18 22:53:04.322: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Running", Reason="", readiness=true. Elapsed: 10.131394243s
Feb 18 22:53:06.335: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Running", Reason="", readiness=true. Elapsed: 12.1443278s
Feb 18 22:53:08.345: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Running", Reason="", readiness=true. Elapsed: 14.154578144s
Feb 18 22:53:10.354: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Running", Reason="", readiness=true. Elapsed: 16.163217663s
Feb 18 22:53:12.394: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Running", Reason="", readiness=true. Elapsed: 18.202936746s
Feb 18 22:53:14.400: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Running", Reason="", readiness=true. Elapsed: 20.209717805s
Feb 18 22:53:16.407: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Running", Reason="", readiness=true. Elapsed: 22.215903668s
Feb 18 22:53:18.413: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Running", Reason="", readiness=true. Elapsed: 24.222421922s
Feb 18 22:53:20.422: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Running", Reason="", readiness=true. Elapsed: 26.231042061s
Feb 18 22:53:22.430: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Running", Reason="", readiness=true. Elapsed: 28.238917618s
Feb 18 22:53:24.476: INFO: Pod "pod-subpath-test-configmap-xm2j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.285536431s
STEP: Saw pod success
Feb 18 22:53:24.476: INFO: Pod "pod-subpath-test-configmap-xm2j" satisfied condition "success or failure"
Feb 18 22:53:24.484: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-xm2j container test-container-subpath-configmap-xm2j: 
STEP: delete the pod
Feb 18 22:53:24.534: INFO: Waiting for pod pod-subpath-test-configmap-xm2j to disappear
Feb 18 22:53:24.567: INFO: Pod pod-subpath-test-configmap-xm2j no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xm2j
Feb 18 22:53:24.567: INFO: Deleting pod "pod-subpath-test-configmap-xm2j" in namespace "subpath-6082"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:53:24.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6082" for this suite.

• [SLOW TEST:30.575 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":239,"skipped":4009,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:53:24.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 18 22:53:42.940: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 18 22:53:42.957: INFO: Pod pod-with-prestop-http-hook still exists
Feb 18 22:53:44.957: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 18 22:53:44.965: INFO: Pod pod-with-prestop-http-hook still exists
Feb 18 22:53:46.958: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 18 22:53:46.968: INFO: Pod pod-with-prestop-http-hook still exists
Feb 18 22:53:48.957: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 18 22:53:48.966: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:53:48.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5572" for this suite.

• [SLOW TEST:24.348 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4012,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:53:48.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 18 22:53:49.096: INFO: Waiting up to 5m0s for pod "pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc" in namespace "emptydir-2907" to be "success or failure"
Feb 18 22:53:49.103: INFO: Pod "pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.084229ms
Feb 18 22:53:51.141: INFO: Pod "pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045307794s
Feb 18 22:53:53.148: INFO: Pod "pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052511683s
Feb 18 22:53:55.155: INFO: Pod "pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05863063s
Feb 18 22:53:57.159: INFO: Pod "pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062917811s
Feb 18 22:53:59.166: INFO: Pod "pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069794963s
Feb 18 22:54:01.172: INFO: Pod "pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.076445033s
Feb 18 22:54:03.179: INFO: Pod "pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.083542844s
STEP: Saw pod success
Feb 18 22:54:03.180: INFO: Pod "pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc" satisfied condition "success or failure"
Feb 18 22:54:03.183: INFO: Trying to get logs from node jerma-node pod pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc container test-container: 
STEP: delete the pod
Feb 18 22:54:03.240: INFO: Waiting for pod pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc to disappear
Feb 18 22:54:03.288: INFO: Pod pod-0e81915a-8d83-4076-a5d8-ae25c667d2fc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:54:03.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2907" for this suite.

• [SLOW TEST:14.309 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4037,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:54:03.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 18 22:54:03.476: INFO: Waiting up to 5m0s for pod "downward-api-dd360cd1-2f3a-48e0-aafc-70f354225111" in namespace "downward-api-2296" to be "success or failure"
Feb 18 22:54:03.497: INFO: Pod "downward-api-dd360cd1-2f3a-48e0-aafc-70f354225111": Phase="Pending", Reason="", readiness=false. Elapsed: 20.904723ms
Feb 18 22:54:05.510: INFO: Pod "downward-api-dd360cd1-2f3a-48e0-aafc-70f354225111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033515734s
Feb 18 22:54:07.519: INFO: Pod "downward-api-dd360cd1-2f3a-48e0-aafc-70f354225111": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042799542s
Feb 18 22:54:10.037: INFO: Pod "downward-api-dd360cd1-2f3a-48e0-aafc-70f354225111": Phase="Pending", Reason="", readiness=false. Elapsed: 6.561389917s
Feb 18 22:54:12.046: INFO: Pod "downward-api-dd360cd1-2f3a-48e0-aafc-70f354225111": Phase="Pending", Reason="", readiness=false. Elapsed: 8.570048444s
Feb 18 22:54:14.054: INFO: Pod "downward-api-dd360cd1-2f3a-48e0-aafc-70f354225111": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.577841509s
STEP: Saw pod success
Feb 18 22:54:14.054: INFO: Pod "downward-api-dd360cd1-2f3a-48e0-aafc-70f354225111" satisfied condition "success or failure"
Feb 18 22:54:14.057: INFO: Trying to get logs from node jerma-node pod downward-api-dd360cd1-2f3a-48e0-aafc-70f354225111 container dapi-container: 
STEP: delete the pod
Feb 18 22:54:14.094: INFO: Waiting for pod downward-api-dd360cd1-2f3a-48e0-aafc-70f354225111 to disappear
Feb 18 22:54:14.111: INFO: Pod downward-api-dd360cd1-2f3a-48e0-aafc-70f354225111 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:54:14.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2296" for this suite.

• [SLOW TEST:10.811 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4042,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:54:14.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 18 22:54:14.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8217'
Feb 18 22:54:17.371: INFO: stderr: ""
Feb 18 22:54:17.371: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Feb 18 22:54:27.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8217 -o json'
Feb 18 22:54:27.574: INFO: stderr: ""
Feb 18 22:54:27.574: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-18T22:54:17Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-8217\",\n        \"resourceVersion\": \"9285872\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-8217/pods/e2e-test-httpd-pod\",\n        \"uid\": \"94b9e2e6-a89a-4302-b228-f7a4d568aaeb\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-d6flp\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-d6flp\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-d6flp\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-18T22:54:17Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-18T22:54:23Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-18T22:54:23Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-18T22:54:17Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://599d8aeda1a2450b0929565573cf804cb79ef838efd170020ecaf85bd0381540\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                   
     \"startedAt\": \"2020-02-18T22:54:22Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-18T22:54:17Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 18 22:54:27.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8217'
Feb 18 22:54:28.035: INFO: stderr: ""
Feb 18 22:54:28.035: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Feb 18 22:54:28.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8217'
Feb 18 22:54:33.799: INFO: stderr: ""
Feb 18 22:54:33.800: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:54:33.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8217" for this suite.

• [SLOW TEST:19.725 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":243,"skipped":4059,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:54:33.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:55:06.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4601" for this suite.

• [SLOW TEST:32.184 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":244,"skipped":4067,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:55:06.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-9274
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9274
STEP: Deleting pre-stop pod
Feb 18 22:55:29.302: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:55:29.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9274" for this suite.

• [SLOW TEST:23.327 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":245,"skipped":4087,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:55:29.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Feb 18 22:55:29.509: INFO: Created pod &Pod{ObjectMeta:{dns-2738  dns-2738 /api/v1/namespaces/dns-2738/pods/dns-2738 36d0ae34-594f-46be-8c51-63a55345308f 9286165 0 2020-02-18 22:55:29 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n2fwz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n2fwz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n2fwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostn
ame:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Feb 18 22:55:39.542: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2738 PodName:dns-2738 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:55:39.542: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:55:39.610370       8 log.go:172] (0xc002cf22c0) (0xc0012dc640) Create stream
I0218 22:55:39.610467       8 log.go:172] (0xc002cf22c0) (0xc0012dc640) Stream added, broadcasting: 1
I0218 22:55:39.614740       8 log.go:172] (0xc002cf22c0) Reply frame received for 1
I0218 22:55:39.614797       8 log.go:172] (0xc002cf22c0) (0xc0012dc820) Create stream
I0218 22:55:39.614821       8 log.go:172] (0xc002cf22c0) (0xc0012dc820) Stream added, broadcasting: 3
I0218 22:55:39.616837       8 log.go:172] (0xc002cf22c0) Reply frame received for 3
I0218 22:55:39.616873       8 log.go:172] (0xc002cf22c0) (0xc0012dc8c0) Create stream
I0218 22:55:39.616885       8 log.go:172] (0xc002cf22c0) (0xc0012dc8c0) Stream added, broadcasting: 5
I0218 22:55:39.619514       8 log.go:172] (0xc002cf22c0) Reply frame received for 5
I0218 22:55:39.751857       8 log.go:172] (0xc002cf22c0) Data frame received for 3
I0218 22:55:39.751912       8 log.go:172] (0xc0012dc820) (3) Data frame handling
I0218 22:55:39.751955       8 log.go:172] (0xc0012dc820) (3) Data frame sent
I0218 22:55:39.822613       8 log.go:172] (0xc002cf22c0) Data frame received for 1
I0218 22:55:39.822681       8 log.go:172] (0xc0012dc640) (1) Data frame handling
I0218 22:55:39.822699       8 log.go:172] (0xc0012dc640) (1) Data frame sent
I0218 22:55:39.823359       8 log.go:172] (0xc002cf22c0) (0xc0012dc640) Stream removed, broadcasting: 1
I0218 22:55:39.826658       8 log.go:172] (0xc002cf22c0) (0xc0012dc8c0) Stream removed, broadcasting: 5
I0218 22:55:39.826845       8 log.go:172] (0xc002cf22c0) (0xc0012dc820) Stream removed, broadcasting: 3
I0218 22:55:39.826935       8 log.go:172] (0xc002cf22c0) (0xc0012dc640) Stream removed, broadcasting: 1
I0218 22:55:39.826963       8 log.go:172] (0xc002cf22c0) (0xc0012dc820) Stream removed, broadcasting: 3
I0218 22:55:39.826988       8 log.go:172] (0xc002cf22c0) (0xc0012dc8c0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Feb 18 22:55:39.827: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2738 PodName:dns-2738 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 22:55:39.827: INFO: >>> kubeConfig: /root/.kube/config
I0218 22:55:39.827730       8 log.go:172] (0xc002cf22c0) Go away received
I0218 22:55:39.880435       8 log.go:172] (0xc002b06790) (0xc001318e60) Create stream
I0218 22:55:39.880692       8 log.go:172] (0xc002b06790) (0xc001318e60) Stream added, broadcasting: 1
I0218 22:55:39.887922       8 log.go:172] (0xc002b06790) Reply frame received for 1
I0218 22:55:39.888042       8 log.go:172] (0xc002b06790) (0xc001cc0c80) Create stream
I0218 22:55:39.888096       8 log.go:172] (0xc002b06790) (0xc001cc0c80) Stream added, broadcasting: 3
I0218 22:55:39.889495       8 log.go:172] (0xc002b06790) Reply frame received for 3
I0218 22:55:39.889532       8 log.go:172] (0xc002b06790) (0xc001319540) Create stream
I0218 22:55:39.889538       8 log.go:172] (0xc002b06790) (0xc001319540) Stream added, broadcasting: 5
I0218 22:55:39.890670       8 log.go:172] (0xc002b06790) Reply frame received for 5
I0218 22:55:39.966623       8 log.go:172] (0xc002b06790) Data frame received for 3
I0218 22:55:39.966701       8 log.go:172] (0xc001cc0c80) (3) Data frame handling
I0218 22:55:39.966738       8 log.go:172] (0xc001cc0c80) (3) Data frame sent
I0218 22:55:40.043357       8 log.go:172] (0xc002b06790) (0xc001cc0c80) Stream removed, broadcasting: 3
I0218 22:55:40.043516       8 log.go:172] (0xc002b06790) (0xc001319540) Stream removed, broadcasting: 5
I0218 22:55:40.043545       8 log.go:172] (0xc002b06790) Data frame received for 1
I0218 22:55:40.043566       8 log.go:172] (0xc001318e60) (1) Data frame handling
I0218 22:55:40.043628       8 log.go:172] (0xc001318e60) (1) Data frame sent
I0218 22:55:40.043641       8 log.go:172] (0xc002b06790) (0xc001318e60) Stream removed, broadcasting: 1
I0218 22:55:40.043655       8 log.go:172] (0xc002b06790) Go away received
I0218 22:55:40.044026       8 log.go:172] (0xc002b06790) (0xc001318e60) Stream removed, broadcasting: 1
I0218 22:55:40.044094       8 log.go:172] (0xc002b06790) (0xc001cc0c80) Stream removed, broadcasting: 3
I0218 22:55:40.044121       8 log.go:172] (0xc002b06790) (0xc001319540) Stream removed, broadcasting: 5
Feb 18 22:55:40.044: INFO: Deleting pod dns-2738...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:55:40.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2738" for this suite.

• [SLOW TEST:10.724 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":246,"skipped":4105,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:55:40.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-d5f668c2-d1b9-482c-8572-bfddf28bdda1
STEP: Creating a pod to test consume secrets
Feb 18 22:55:40.252: INFO: Waiting up to 5m0s for pod "pod-secrets-30bb9d1c-6764-4464-98f0-56898444ba95" in namespace "secrets-6106" to be "success or failure"
Feb 18 22:55:40.293: INFO: Pod "pod-secrets-30bb9d1c-6764-4464-98f0-56898444ba95": Phase="Pending", Reason="", readiness=false. Elapsed: 40.239641ms
Feb 18 22:55:42.300: INFO: Pod "pod-secrets-30bb9d1c-6764-4464-98f0-56898444ba95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046949835s
Feb 18 22:55:44.318: INFO: Pod "pod-secrets-30bb9d1c-6764-4464-98f0-56898444ba95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065064317s
Feb 18 22:55:46.327: INFO: Pod "pod-secrets-30bb9d1c-6764-4464-98f0-56898444ba95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074710724s
Feb 18 22:55:48.339: INFO: Pod "pod-secrets-30bb9d1c-6764-4464-98f0-56898444ba95": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08609114s
Feb 18 22:55:50.355: INFO: Pod "pod-secrets-30bb9d1c-6764-4464-98f0-56898444ba95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102263192s
STEP: Saw pod success
Feb 18 22:55:50.355: INFO: Pod "pod-secrets-30bb9d1c-6764-4464-98f0-56898444ba95" satisfied condition "success or failure"
Feb 18 22:55:50.362: INFO: Trying to get logs from node jerma-node pod pod-secrets-30bb9d1c-6764-4464-98f0-56898444ba95 container secret-volume-test: 
STEP: delete the pod
Feb 18 22:55:50.493: INFO: Waiting for pod pod-secrets-30bb9d1c-6764-4464-98f0-56898444ba95 to disappear
Feb 18 22:55:50.514: INFO: Pod pod-secrets-30bb9d1c-6764-4464-98f0-56898444ba95 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:55:50.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6106" for this suite.

• [SLOW TEST:10.495 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4145,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:55:50.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Feb 18 22:55:50.656: INFO: Waiting up to 5m0s for pod "var-expansion-28133e83-d534-4b4d-99fe-36c57e38f129" in namespace "var-expansion-9844" to be "success or failure"
Feb 18 22:55:50.694: INFO: Pod "var-expansion-28133e83-d534-4b4d-99fe-36c57e38f129": Phase="Pending", Reason="", readiness=false. Elapsed: 37.827551ms
Feb 18 22:55:52.702: INFO: Pod "var-expansion-28133e83-d534-4b4d-99fe-36c57e38f129": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044997409s
Feb 18 22:55:54.720: INFO: Pod "var-expansion-28133e83-d534-4b4d-99fe-36c57e38f129": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063490916s
Feb 18 22:55:56.775: INFO: Pod "var-expansion-28133e83-d534-4b4d-99fe-36c57e38f129": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118259438s
Feb 18 22:55:58.784: INFO: Pod "var-expansion-28133e83-d534-4b4d-99fe-36c57e38f129": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12742845s
Feb 18 22:56:00.805: INFO: Pod "var-expansion-28133e83-d534-4b4d-99fe-36c57e38f129": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148408581s
STEP: Saw pod success
Feb 18 22:56:00.805: INFO: Pod "var-expansion-28133e83-d534-4b4d-99fe-36c57e38f129" satisfied condition "success or failure"
Feb 18 22:56:00.810: INFO: Trying to get logs from node jerma-node pod var-expansion-28133e83-d534-4b4d-99fe-36c57e38f129 container dapi-container: 
STEP: delete the pod
Feb 18 22:56:00.882: INFO: Waiting for pod var-expansion-28133e83-d534-4b4d-99fe-36c57e38f129 to disappear
Feb 18 22:56:00.905: INFO: Pod var-expansion-28133e83-d534-4b4d-99fe-36c57e38f129 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:56:00.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9844" for this suite.

• [SLOW TEST:10.374 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4147,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:56:00.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:56:01.161: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:56:07.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2885" for this suite.

• [SLOW TEST:6.467 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":249,"skipped":4149,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:56:07.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 18 22:56:07.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7749'
Feb 18 22:56:07.725: INFO: stderr: ""
Feb 18 22:56:07.725: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Feb 18 22:56:07.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7749'
Feb 18 22:56:12.354: INFO: stderr: ""
Feb 18 22:56:12.354: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:56:12.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7749" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":250,"skipped":4166,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:56:12.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:56:12.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5556'
Feb 18 22:56:12.968: INFO: stderr: ""
Feb 18 22:56:12.968: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Feb 18 22:56:12.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5556'
Feb 18 22:56:13.495: INFO: stderr: ""
Feb 18 22:56:13.495: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 18 22:56:14.505: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:56:14.506: INFO: Found 0 / 1
Feb 18 22:56:15.512: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:56:15.512: INFO: Found 0 / 1
Feb 18 22:56:16.509: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:56:16.509: INFO: Found 0 / 1
Feb 18 22:56:17.586: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:56:17.586: INFO: Found 0 / 1
Feb 18 22:56:18.506: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:56:18.506: INFO: Found 0 / 1
Feb 18 22:56:19.528: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:56:19.528: INFO: Found 0 / 1
Feb 18 22:56:20.508: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:56:20.508: INFO: Found 1 / 1
Feb 18 22:56:20.508: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 18 22:56:20.513: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 18 22:56:20.513: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 18 22:56:20.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-zqpr2 --namespace=kubectl-5556'
Feb 18 22:56:20.648: INFO: stderr: ""
Feb 18 22:56:20.648: INFO: stdout: "Name:         agnhost-master-zqpr2\nNamespace:    kubectl-5556\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Tue, 18 Feb 2020 22:56:13 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://bccdc821c2d855ba4e92d11f875254233b3984e4095e05df77996ceb7aa54e95\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 18 Feb 2020 22:56:19 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bdj76 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-bdj76:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-bdj76\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-5556/agnhost-master-zqpr2 to jerma-node\n  Normal  Pulled     4s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    1s         kubelet, jerma-node  Started container agnhost-master\n"
Feb 18 22:56:20.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5556'
Feb 18 22:56:20.802: INFO: stderr: ""
Feb 18 22:56:20.802: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5556\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: agnhost-master-zqpr2\n"
Feb 18 22:56:20.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5556'
Feb 18 22:56:20.900: INFO: stderr: ""
Feb 18 22:56:20.900: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5556\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.5.115\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb 18 22:56:20.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Feb 18 22:56:21.106: INFO: stderr: ""
Feb 18 22:56:21.106: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Tue, 18 Feb 2020 22:56:19 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Tue, 18 Feb 2020 22:52:11 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Tue, 18 Feb 2020 22:52:11 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Tue, 18 Feb 2020 22:52:11 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Tue, 18 Feb 2020 22:52:11 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         45d\n  kubectl-5556                agnhost-master-zqpr2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb 18 22:56:21.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5556'
Feb 18 22:56:21.254: INFO: stderr: ""
Feb 18 22:56:21.254: INFO: stdout: "Name:         kubectl-5556\nLabels:       e2e-framework=kubectl\n              e2e-run=3cf67b14-2d12-4463-85e3-4375b5ca43cc\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:56:21.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5556" for this suite.

• [SLOW TEST:8.896 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":251,"skipped":4166,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:56:21.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 22:56:21.364: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-317c19e6-0401-4ff0-a055-9a8531ca17e0" in namespace "security-context-test-2470" to be "success or failure"
Feb 18 22:56:21.379: INFO: Pod "busybox-privileged-false-317c19e6-0401-4ff0-a055-9a8531ca17e0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.263908ms
Feb 18 22:56:23.391: INFO: Pod "busybox-privileged-false-317c19e6-0401-4ff0-a055-9a8531ca17e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026579583s
Feb 18 22:56:25.405: INFO: Pod "busybox-privileged-false-317c19e6-0401-4ff0-a055-9a8531ca17e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041072471s
Feb 18 22:56:27.412: INFO: Pod "busybox-privileged-false-317c19e6-0401-4ff0-a055-9a8531ca17e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047767837s
Feb 18 22:56:29.441: INFO: Pod "busybox-privileged-false-317c19e6-0401-4ff0-a055-9a8531ca17e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077082641s
Feb 18 22:56:31.456: INFO: Pod "busybox-privileged-false-317c19e6-0401-4ff0-a055-9a8531ca17e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091516439s
Feb 18 22:56:31.456: INFO: Pod "busybox-privileged-false-317c19e6-0401-4ff0-a055-9a8531ca17e0" satisfied condition "success or failure"
Feb 18 22:56:31.474: INFO: Got logs for pod "busybox-privileged-false-317c19e6-0401-4ff0-a055-9a8531ca17e0": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:56:31.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2470" for this suite.

• [SLOW TEST:10.229 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4181,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:56:31.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Feb 18 22:56:31.604: INFO: Waiting up to 5m0s for pod "var-expansion-bcdf3f52-db16-428f-84eb-d09656f4ebd6" in namespace "var-expansion-4836" to be "success or failure"
Feb 18 22:56:31.611: INFO: Pod "var-expansion-bcdf3f52-db16-428f-84eb-d09656f4ebd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.85213ms
Feb 18 22:56:33.621: INFO: Pod "var-expansion-bcdf3f52-db16-428f-84eb-d09656f4ebd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015991328s
Feb 18 22:56:35.632: INFO: Pod "var-expansion-bcdf3f52-db16-428f-84eb-d09656f4ebd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027254851s
Feb 18 22:56:37.639: INFO: Pod "var-expansion-bcdf3f52-db16-428f-84eb-d09656f4ebd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034798452s
Feb 18 22:56:39.649: INFO: Pod "var-expansion-bcdf3f52-db16-428f-84eb-d09656f4ebd6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04410797s
Feb 18 22:56:41.658: INFO: Pod "var-expansion-bcdf3f52-db16-428f-84eb-d09656f4ebd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053277893s
STEP: Saw pod success
Feb 18 22:56:41.658: INFO: Pod "var-expansion-bcdf3f52-db16-428f-84eb-d09656f4ebd6" satisfied condition "success or failure"
Feb 18 22:56:41.664: INFO: Trying to get logs from node jerma-node pod var-expansion-bcdf3f52-db16-428f-84eb-d09656f4ebd6 container dapi-container: 
STEP: delete the pod
Feb 18 22:56:41.872: INFO: Waiting for pod var-expansion-bcdf3f52-db16-428f-84eb-d09656f4ebd6 to disappear
Feb 18 22:56:41.887: INFO: Pod var-expansion-bcdf3f52-db16-428f-84eb-d09656f4ebd6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 22:56:41.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4836" for this suite.

• [SLOW TEST:10.412 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4184,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 22:56:41.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb 18 22:56:42.071: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 18 22:56:42.110: INFO: Waiting for terminating namespaces to be deleted...
Feb 18 22:56:42.154: INFO: 
Logging pods the kubelet thinks is on node jerma-node before test
Feb 18 22:56:42.170: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 18 22:56:42.170: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 22:56:42.170: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 18 22:56:42.170: INFO: 	Container weave ready: true, restart count 1
Feb 18 22:56:42.170: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 22:56:42.170: INFO: 
Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Feb 18 22:56:42.235: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 18 22:56:42.235: INFO: 	Container coredns ready: true, restart count 0
Feb 18 22:56:42.235: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 18 22:56:42.235: INFO: 	Container coredns ready: true, restart count 0
Feb 18 22:56:42.235: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 18 22:56:42.235: INFO: 	Container kube-controller-manager ready: true, restart count 14
Feb 18 22:56:42.235: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 18 22:56:42.235: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 22:56:42.235: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 18 22:56:42.235: INFO: 	Container weave ready: true, restart count 0
Feb 18 22:56:42.235: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 22:56:42.235: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 18 22:56:42.235: INFO: 	Container kube-scheduler ready: true, restart count 18
Feb 18 22:56:42.235: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 18 22:56:42.235: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 18 22:56:42.235: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 18 22:56:42.235: INFO: 	Container etcd ready: true, restart count 1
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-0df28b67-b842-4738-b3e3-2845a5a7a157 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-0df28b67-b842-4738-b3e3-2845a5a7a157 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0df28b67-b842-4738-b3e3-2845a5a7a157
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:01:58.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5925" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:316.658 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":254,"skipped":4191,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:01:58.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 18 23:01:58.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2746'
Feb 18 23:01:58.832: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 23:01:58.832: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Feb 18 23:01:58.862: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-dds2t]
Feb 18 23:01:58.862: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-dds2t" in namespace "kubectl-2746" to be "running and ready"
Feb 18 23:01:58.865: INFO: Pod "e2e-test-httpd-rc-dds2t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662502ms
Feb 18 23:02:00.872: INFO: Pod "e2e-test-httpd-rc-dds2t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009528302s
Feb 18 23:02:02.881: INFO: Pod "e2e-test-httpd-rc-dds2t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019177733s
Feb 18 23:02:04.887: INFO: Pod "e2e-test-httpd-rc-dds2t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024779342s
Feb 18 23:02:06.901: INFO: Pod "e2e-test-httpd-rc-dds2t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039155538s
Feb 18 23:02:08.910: INFO: Pod "e2e-test-httpd-rc-dds2t": Phase="Running", Reason="", readiness=true. Elapsed: 10.047633261s
Feb 18 23:02:08.910: INFO: Pod "e2e-test-httpd-rc-dds2t" satisfied condition "running and ready"
Feb 18 23:02:08.910: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-dds2t]
Feb 18 23:02:08.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-2746'
Feb 18 23:02:09.126: INFO: stderr: ""
Feb 18 23:02:09.126: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.2. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.2. Set the 'ServerName' directive globally to suppress this message\n[Tue Feb 18 23:02:05.713646 2020] [mpm_event:notice] [pid 1:tid 140146041015144] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue Feb 18 23:02:05.713742 2020] [core:notice] [pid 1:tid 140146041015144] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 18 23:02:09.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2746'
Feb 18 23:02:09.242: INFO: stderr: ""
Feb 18 23:02:09.242: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:02:09.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2746" for this suite.

• [SLOW TEST:10.691 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":255,"skipped":4211,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:02:09.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:02:19.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-333" for this suite.

• [SLOW TEST:10.193 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4212,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:02:19.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 18 23:02:19.580: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9dbacc36-f92c-4c4a-8275-c38a27e45d09" in namespace "downward-api-4029" to be "success or failure"
Feb 18 23:02:19.596: INFO: Pod "downwardapi-volume-9dbacc36-f92c-4c4a-8275-c38a27e45d09": Phase="Pending", Reason="", readiness=false. Elapsed: 15.688834ms
Feb 18 23:02:21.604: INFO: Pod "downwardapi-volume-9dbacc36-f92c-4c4a-8275-c38a27e45d09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023962811s
Feb 18 23:02:23.615: INFO: Pod "downwardapi-volume-9dbacc36-f92c-4c4a-8275-c38a27e45d09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03501348s
Feb 18 23:02:25.622: INFO: Pod "downwardapi-volume-9dbacc36-f92c-4c4a-8275-c38a27e45d09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041837945s
Feb 18 23:02:27.627: INFO: Pod "downwardapi-volume-9dbacc36-f92c-4c4a-8275-c38a27e45d09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047273948s
STEP: Saw pod success
Feb 18 23:02:27.627: INFO: Pod "downwardapi-volume-9dbacc36-f92c-4c4a-8275-c38a27e45d09" satisfied condition "success or failure"
Feb 18 23:02:27.631: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9dbacc36-f92c-4c4a-8275-c38a27e45d09 container client-container: 
STEP: delete the pod
Feb 18 23:02:27.800: INFO: Waiting for pod downwardapi-volume-9dbacc36-f92c-4c4a-8275-c38a27e45d09 to disappear
Feb 18 23:02:27.812: INFO: Pod downwardapi-volume-9dbacc36-f92c-4c4a-8275-c38a27e45d09 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:02:27.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4029" for this suite.

• [SLOW TEST:8.394 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4258,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:02:27.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3558
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-3558
Feb 18 23:02:28.141: INFO: Found 0 stateful pods, waiting for 1
Feb 18 23:02:38.152: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb 18 23:02:38.247: INFO: Deleting all statefulset in ns statefulset-3558
Feb 18 23:02:38.326: INFO: Scaling statefulset ss to 0
Feb 18 23:02:58.395: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 23:02:58.413: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:02:58.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3558" for this suite.

• [SLOW TEST:30.611 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":258,"skipped":4262,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:02:58.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-446q
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 23:02:58.590: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-446q" in namespace "subpath-1291" to be "success or failure"
Feb 18 23:02:58.662: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Pending", Reason="", readiness=false. Elapsed: 71.713766ms
Feb 18 23:03:00.670: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079763319s
Feb 18 23:03:02.675: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08436149s
Feb 18 23:03:04.681: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090274014s
Feb 18 23:03:06.686: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Running", Reason="", readiness=true. Elapsed: 8.09581986s
Feb 18 23:03:08.694: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Running", Reason="", readiness=true. Elapsed: 10.103501894s
Feb 18 23:03:10.701: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Running", Reason="", readiness=true. Elapsed: 12.11086047s
Feb 18 23:03:12.711: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Running", Reason="", readiness=true. Elapsed: 14.120761261s
Feb 18 23:03:14.722: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Running", Reason="", readiness=true. Elapsed: 16.131405601s
Feb 18 23:03:16.728: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Running", Reason="", readiness=true. Elapsed: 18.137197182s
Feb 18 23:03:18.733: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Running", Reason="", readiness=true. Elapsed: 20.142602615s
Feb 18 23:03:20.748: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Running", Reason="", readiness=true. Elapsed: 22.157180289s
Feb 18 23:03:22.756: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Running", Reason="", readiness=true. Elapsed: 24.165698087s
Feb 18 23:03:24.763: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Running", Reason="", readiness=true. Elapsed: 26.172126022s
Feb 18 23:03:26.772: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Running", Reason="", readiness=true. Elapsed: 28.181494596s
Feb 18 23:03:28.797: INFO: Pod "pod-subpath-test-configmap-446q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.206679754s
STEP: Saw pod success
Feb 18 23:03:28.797: INFO: Pod "pod-subpath-test-configmap-446q" satisfied condition "success or failure"
Feb 18 23:03:28.805: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-446q container test-container-subpath-configmap-446q: 
STEP: delete the pod
Feb 18 23:03:28.893: INFO: Waiting for pod pod-subpath-test-configmap-446q to disappear
Feb 18 23:03:28.899: INFO: Pod pod-subpath-test-configmap-446q no longer exists
STEP: Deleting pod pod-subpath-test-configmap-446q
Feb 18 23:03:28.899: INFO: Deleting pod "pod-subpath-test-configmap-446q" in namespace "subpath-1291"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:03:28.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1291" for this suite.

• [SLOW TEST:30.459 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":259,"skipped":4270,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:03:28.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 18 23:03:38.182: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:03:38.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7159" for this suite.

• [SLOW TEST:9.333 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":260,"skipped":4290,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:03:38.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Feb 18 23:03:38.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-45'
Feb 18 23:03:38.840: INFO: stderr: ""
Feb 18 23:03:38.840: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 23:03:38.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-45'
Feb 18 23:03:39.009: INFO: stderr: ""
Feb 18 23:03:39.009: INFO: stdout: "update-demo-nautilus-558st update-demo-nautilus-pq6dj "
Feb 18 23:03:39.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-558st -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:03:39.195: INFO: stderr: ""
Feb 18 23:03:39.195: INFO: stdout: ""
Feb 18 23:03:39.195: INFO: update-demo-nautilus-558st is created but not running
Feb 18 23:03:44.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-45'
Feb 18 23:03:44.740: INFO: stderr: ""
Feb 18 23:03:44.741: INFO: stdout: "update-demo-nautilus-558st update-demo-nautilus-pq6dj "
Feb 18 23:03:44.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-558st -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:03:45.660: INFO: stderr: ""
Feb 18 23:03:45.661: INFO: stdout: ""
Feb 18 23:03:45.661: INFO: update-demo-nautilus-558st is created but not running
Feb 18 23:03:50.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-45'
Feb 18 23:03:50.779: INFO: stderr: ""
Feb 18 23:03:50.780: INFO: stdout: "update-demo-nautilus-558st update-demo-nautilus-pq6dj "
Feb 18 23:03:50.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-558st -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:03:50.918: INFO: stderr: ""
Feb 18 23:03:50.918: INFO: stdout: "true"
Feb 18 23:03:50.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-558st -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:03:51.045: INFO: stderr: ""
Feb 18 23:03:51.046: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 23:03:51.046: INFO: validating pod update-demo-nautilus-558st
Feb 18 23:03:51.056: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 23:03:51.057: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 18 23:03:51.057: INFO: update-demo-nautilus-558st is verified up and running
Feb 18 23:03:51.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pq6dj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:03:51.191: INFO: stderr: ""
Feb 18 23:03:51.191: INFO: stdout: "true"
Feb 18 23:03:51.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pq6dj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:03:51.277: INFO: stderr: ""
Feb 18 23:03:51.277: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 23:03:51.277: INFO: validating pod update-demo-nautilus-pq6dj
Feb 18 23:03:51.292: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 23:03:51.292: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 18 23:03:51.292: INFO: update-demo-nautilus-pq6dj is verified up and running
STEP: scaling down the replication controller
Feb 18 23:03:51.294: INFO: scanned /root for discovery docs: 
Feb 18 23:03:51.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-45'
Feb 18 23:03:52.754: INFO: stderr: ""
Feb 18 23:03:52.755: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 23:03:52.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-45'
Feb 18 23:03:55.621: INFO: stderr: ""
Feb 18 23:03:55.622: INFO: stdout: "update-demo-nautilus-558st update-demo-nautilus-pq6dj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 18 23:04:00.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-45'
Feb 18 23:04:00.766: INFO: stderr: ""
Feb 18 23:04:00.766: INFO: stdout: "update-demo-nautilus-pq6dj "
Feb 18 23:04:00.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pq6dj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:04:00.885: INFO: stderr: ""
Feb 18 23:04:00.885: INFO: stdout: "true"
Feb 18 23:04:00.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pq6dj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:04:00.984: INFO: stderr: ""
Feb 18 23:04:00.984: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 23:04:00.984: INFO: validating pod update-demo-nautilus-pq6dj
Feb 18 23:04:00.990: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 23:04:00.990: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 18 23:04:00.990: INFO: update-demo-nautilus-pq6dj is verified up and running
STEP: scaling up the replication controller
Feb 18 23:04:00.992: INFO: scanned /root for discovery docs: 
Feb 18 23:04:00.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-45'
Feb 18 23:04:02.199: INFO: stderr: ""
Feb 18 23:04:02.200: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 23:04:02.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-45'
Feb 18 23:04:03.456: INFO: stderr: ""
Feb 18 23:04:03.456: INFO: stdout: "update-demo-nautilus-ncqz9 update-demo-nautilus-pq6dj "
Feb 18 23:04:03.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ncqz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:04:04.178: INFO: stderr: ""
Feb 18 23:04:04.178: INFO: stdout: ""
Feb 18 23:04:04.178: INFO: update-demo-nautilus-ncqz9 is created but not running
Feb 18 23:04:09.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-45'
Feb 18 23:04:09.334: INFO: stderr: ""
Feb 18 23:04:09.334: INFO: stdout: "update-demo-nautilus-ncqz9 update-demo-nautilus-pq6dj "
Feb 18 23:04:09.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ncqz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:04:09.441: INFO: stderr: ""
Feb 18 23:04:09.442: INFO: stdout: ""
Feb 18 23:04:09.442: INFO: update-demo-nautilus-ncqz9 is created but not running
Feb 18 23:04:14.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-45'
Feb 18 23:04:14.795: INFO: stderr: ""
Feb 18 23:04:14.795: INFO: stdout: "update-demo-nautilus-ncqz9 update-demo-nautilus-pq6dj "
Feb 18 23:04:14.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ncqz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:04:14.904: INFO: stderr: ""
Feb 18 23:04:14.904: INFO: stdout: "true"
Feb 18 23:04:14.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ncqz9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:04:15.025: INFO: stderr: ""
Feb 18 23:04:15.025: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 23:04:15.025: INFO: validating pod update-demo-nautilus-ncqz9
Feb 18 23:04:15.032: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 23:04:15.032: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 18 23:04:15.032: INFO: update-demo-nautilus-ncqz9 is verified up and running
Feb 18 23:04:15.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pq6dj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:04:15.122: INFO: stderr: ""
Feb 18 23:04:15.122: INFO: stdout: "true"
Feb 18 23:04:15.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pq6dj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-45'
Feb 18 23:04:15.229: INFO: stderr: ""
Feb 18 23:04:15.229: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 23:04:15.229: INFO: validating pod update-demo-nautilus-pq6dj
Feb 18 23:04:15.235: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 23:04:15.235: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 18 23:04:15.235: INFO: update-demo-nautilus-pq6dj is verified up and running
STEP: using delete to clean up resources
Feb 18 23:04:15.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-45'
Feb 18 23:04:15.350: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 18 23:04:15.351: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 18 23:04:15.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-45'
Feb 18 23:04:15.497: INFO: stderr: "No resources found in kubectl-45 namespace.\n"
Feb 18 23:04:15.498: INFO: stdout: ""
Feb 18 23:04:15.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-45 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 18 23:04:15.648: INFO: stderr: ""
Feb 18 23:04:15.648: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:04:15.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-45" for this suite.

• [SLOW TEST:37.435 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":261,"skipped":4325,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:04:15.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4322
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4322
I0218 23:04:17.991954       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4322, replica count: 2
I0218 23:04:21.042878       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:04:24.043301       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:04:27.044651       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:04:30.045276       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:04:33.045659       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 18 23:04:33.045: INFO: Creating new exec pod
Feb 18 23:04:42.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4322 execpodrvnvs -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 18 23:04:45.418: INFO: stderr: "I0218 23:04:45.218049    4473 log.go:172] (0xc000397080) (0xc0006e3f40) Create stream\nI0218 23:04:45.218138    4473 log.go:172] (0xc000397080) (0xc0006e3f40) Stream added, broadcasting: 1\nI0218 23:04:45.227138    4473 log.go:172] (0xc000397080) Reply frame received for 1\nI0218 23:04:45.227394    4473 log.go:172] (0xc000397080) (0xc00066e820) Create stream\nI0218 23:04:45.227466    4473 log.go:172] (0xc000397080) (0xc00066e820) Stream added, broadcasting: 3\nI0218 23:04:45.230178    4473 log.go:172] (0xc000397080) Reply frame received for 3\nI0218 23:04:45.230214    4473 log.go:172] (0xc000397080) (0xc0004f9680) Create stream\nI0218 23:04:45.230228    4473 log.go:172] (0xc000397080) (0xc0004f9680) Stream added, broadcasting: 5\nI0218 23:04:45.232876    4473 log.go:172] (0xc000397080) Reply frame received for 5\nI0218 23:04:45.328735    4473 log.go:172] (0xc000397080) Data frame received for 5\nI0218 23:04:45.328804    4473 log.go:172] (0xc0004f9680) (5) Data frame handling\nI0218 23:04:45.328828    4473 log.go:172] (0xc0004f9680) (5) Data frame sent\nI0218 23:04:45.328838    4473 log.go:172] (0xc000397080) Data frame received for 5\nI0218 23:04:45.328844    4473 log.go:172] (0xc0004f9680) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI0218 23:04:45.328868    4473 log.go:172] (0xc0004f9680) (5) Data frame sent\nI0218 23:04:45.336341    4473 log.go:172] (0xc000397080) Data frame received for 5\nI0218 23:04:45.336376    4473 log.go:172] (0xc0004f9680) (5) Data frame handling\nI0218 23:04:45.336406    4473 log.go:172] (0xc0004f9680) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0218 23:04:45.405275    4473 log.go:172] (0xc000397080) Data frame received for 1\nI0218 23:04:45.405373    4473 log.go:172] (0xc000397080) (0xc00066e820) Stream removed, broadcasting: 3\nI0218 23:04:45.405419    4473 log.go:172] (0xc0006e3f40) (1) Data frame handling\nI0218 23:04:45.405441    4473 log.go:172] (0xc000397080) (0xc0004f9680) Stream removed, broadcasting: 5\nI0218 23:04:45.405462    4473 log.go:172] (0xc0006e3f40) (1) Data frame sent\nI0218 23:04:45.405474    4473 log.go:172] (0xc000397080) (0xc0006e3f40) Stream removed, broadcasting: 1\nI0218 23:04:45.405487    4473 log.go:172] (0xc000397080) Go away received\nI0218 23:04:45.406484    4473 log.go:172] (0xc000397080) (0xc0006e3f40) Stream removed, broadcasting: 1\nI0218 23:04:45.406505    4473 log.go:172] (0xc000397080) (0xc00066e820) Stream removed, broadcasting: 3\nI0218 23:04:45.406512    4473 log.go:172] (0xc000397080) (0xc0004f9680) Stream removed, broadcasting: 5\n"

Feb 18 23:04:45.418: INFO: stdout: ""
Feb 18 23:04:45.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4322 execpodrvnvs -- /bin/sh -x -c nc -zv -t -w 2 10.96.44.50 80'
Feb 18 23:04:45.752: INFO: stderr: "I0218 23:04:45.568316    4509 log.go:172] (0xc0000f42c0) (0xc0007eda40) Create stream\nI0218 23:04:45.568486    4509 log.go:172] (0xc0000f42c0) (0xc0007eda40) Stream added, broadcasting: 1\nI0218 23:04:45.570723    4509 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0218 23:04:45.570760    4509 log.go:172] (0xc0000f42c0) (0xc000770000) Create stream\nI0218 23:04:45.570771    4509 log.go:172] (0xc0000f42c0) (0xc000770000) Stream added, broadcasting: 3\nI0218 23:04:45.572027    4509 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0218 23:04:45.572049    4509 log.go:172] (0xc0000f42c0) (0xc00052b4a0) Create stream\nI0218 23:04:45.572060    4509 log.go:172] (0xc0000f42c0) (0xc00052b4a0) Stream added, broadcasting: 5\nI0218 23:04:45.573298    4509 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0218 23:04:45.675814    4509 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0218 23:04:45.676086    4509 log.go:172] (0xc00052b4a0) (5) Data frame handling\nI0218 23:04:45.676106    4509 log.go:172] (0xc00052b4a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.44.50 80\nI0218 23:04:45.678886    4509 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0218 23:04:45.679006    4509 log.go:172] (0xc00052b4a0) (5) Data frame handling\nI0218 23:04:45.679058    4509 log.go:172] (0xc00052b4a0) (5) Data frame sent\nConnection to 10.96.44.50 80 port [tcp/http] succeeded!\nI0218 23:04:45.743938    4509 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0218 23:04:45.744037    4509 log.go:172] (0xc0000f42c0) (0xc000770000) Stream removed, broadcasting: 3\nI0218 23:04:45.744066    4509 log.go:172] (0xc0007eda40) (1) Data frame handling\nI0218 23:04:45.744080    4509 log.go:172] (0xc0007eda40) (1) Data frame sent\nI0218 23:04:45.744102    4509 log.go:172] (0xc0000f42c0) (0xc00052b4a0) Stream removed, broadcasting: 5\nI0218 23:04:45.744116    4509 log.go:172] (0xc0000f42c0) (0xc0007eda40) Stream removed, broadcasting: 1\nI0218 23:04:45.744129    4509 log.go:172] (0xc0000f42c0) Go away received\nI0218 23:04:45.744810    4509 log.go:172] (0xc0000f42c0) (0xc0007eda40) Stream removed, broadcasting: 1\nI0218 23:04:45.744834    4509 log.go:172] (0xc0000f42c0) (0xc000770000) Stream removed, broadcasting: 3\nI0218 23:04:45.744848    4509 log.go:172] (0xc0000f42c0) (0xc00052b4a0) Stream removed, broadcasting: 5\n"
Feb 18 23:04:45.752: INFO: stdout: ""
Feb 18 23:04:45.752: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:04:45.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4322" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:30.119 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":262,"skipped":4337,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
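Note: the single-line JSON progress records interleaved with the Ginkgo output, like the one above, can be parsed mechanically to track suite progress. A minimal sketch in Python, using a sample record in the same shape as this log's lines (the `failures` array is shortened here for brevity; the field names match the log):

```python
import json

# A sample Ginkgo progress record, shaped like the JSON lines in this log.
line = ('{"msg":"PASSED [sig-network] Services should be able to change the '
        'type from ExternalName to ClusterIP [Conformance]","total":278,'
        '"completed":262,"skipped":4337,"failed":2,"failures":[]}')

record = json.loads(line)
remaining = record["total"] - record["completed"]
print(record["msg"])
print(f'{record["completed"]}/{record["total"]} specs completed, '
      f'{remaining} remaining, {record["failed"]} failed')
```

In a full log, filtering stdin for lines that start with `{"msg":` and feeding each through `json.loads` gives a running progress/failure summary without any third-party tooling.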
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:04:45.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-5045
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 18 23:04:45.905: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 18 23:05:28.103: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.44.0.3&port=8080&tries=1'] Namespace:pod-network-test-5045 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 23:05:28.103: INFO: >>> kubeConfig: /root/.kube/config
I0218 23:05:28.159258       8 log.go:172] (0xc002b50630) (0xc00112e780) Create stream
I0218 23:05:28.159334       8 log.go:172] (0xc002b50630) (0xc00112e780) Stream added, broadcasting: 1
I0218 23:05:28.164406       8 log.go:172] (0xc002b50630) Reply frame received for 1
I0218 23:05:28.164488       8 log.go:172] (0xc002b50630) (0xc00112e8c0) Create stream
I0218 23:05:28.164511       8 log.go:172] (0xc002b50630) (0xc00112e8c0) Stream added, broadcasting: 3
I0218 23:05:28.166243       8 log.go:172] (0xc002b50630) Reply frame received for 3
I0218 23:05:28.166272       8 log.go:172] (0xc002b50630) (0xc001a58000) Create stream
I0218 23:05:28.166282       8 log.go:172] (0xc002b50630) (0xc001a58000) Stream added, broadcasting: 5
I0218 23:05:28.167787       8 log.go:172] (0xc002b50630) Reply frame received for 5
I0218 23:05:28.281194       8 log.go:172] (0xc002b50630) Data frame received for 3
I0218 23:05:28.281243       8 log.go:172] (0xc00112e8c0) (3) Data frame handling
I0218 23:05:28.281267       8 log.go:172] (0xc00112e8c0) (3) Data frame sent
I0218 23:05:28.368038       8 log.go:172] (0xc002b50630) Data frame received for 1
I0218 23:05:28.368156       8 log.go:172] (0xc002b50630) (0xc00112e8c0) Stream removed, broadcasting: 3
I0218 23:05:28.368196       8 log.go:172] (0xc00112e780) (1) Data frame handling
I0218 23:05:28.368210       8 log.go:172] (0xc00112e780) (1) Data frame sent
I0218 23:05:28.368218       8 log.go:172] (0xc002b50630) (0xc00112e780) Stream removed, broadcasting: 1
I0218 23:05:28.368904       8 log.go:172] (0xc002b50630) (0xc001a58000) Stream removed, broadcasting: 5
I0218 23:05:28.368940       8 log.go:172] (0xc002b50630) (0xc00112e780) Stream removed, broadcasting: 1
I0218 23:05:28.368948       8 log.go:172] (0xc002b50630) (0xc00112e8c0) Stream removed, broadcasting: 3
I0218 23:05:28.368957       8 log.go:172] (0xc002b50630) (0xc001a58000) Stream removed, broadcasting: 5
I0218 23:05:28.369115       8 log.go:172] (0xc002b50630) Go away received
Feb 18 23:05:28.369: INFO: Waiting for responses: map[]
Feb 18 23:05:28.373: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.32.0.5&port=8080&tries=1'] Namespace:pod-network-test-5045 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 23:05:28.373: INFO: >>> kubeConfig: /root/.kube/config
I0218 23:05:28.413172       8 log.go:172] (0xc001df34a0) (0xc001a58500) Create stream
I0218 23:05:28.413285       8 log.go:172] (0xc001df34a0) (0xc001a58500) Stream added, broadcasting: 1
I0218 23:05:28.416430       8 log.go:172] (0xc001df34a0) Reply frame received for 1
I0218 23:05:28.416461       8 log.go:172] (0xc001df34a0) (0xc000421ea0) Create stream
I0218 23:05:28.416470       8 log.go:172] (0xc001df34a0) (0xc000421ea0) Stream added, broadcasting: 3
I0218 23:05:28.418030       8 log.go:172] (0xc001df34a0) Reply frame received for 3
I0218 23:05:28.418050       8 log.go:172] (0xc001df34a0) (0xc000b14e60) Create stream
I0218 23:05:28.418059       8 log.go:172] (0xc001df34a0) (0xc000b14e60) Stream added, broadcasting: 5
I0218 23:05:28.419424       8 log.go:172] (0xc001df34a0) Reply frame received for 5
I0218 23:05:28.530192       8 log.go:172] (0xc001df34a0) Data frame received for 3
I0218 23:05:28.530366       8 log.go:172] (0xc000421ea0) (3) Data frame handling
I0218 23:05:28.530435       8 log.go:172] (0xc000421ea0) (3) Data frame sent
I0218 23:05:28.637932       8 log.go:172] (0xc001df34a0) (0xc000421ea0) Stream removed, broadcasting: 3
I0218 23:05:28.638243       8 log.go:172] (0xc001df34a0) Data frame received for 1
I0218 23:05:28.638260       8 log.go:172] (0xc001a58500) (1) Data frame handling
I0218 23:05:28.638300       8 log.go:172] (0xc001a58500) (1) Data frame sent
I0218 23:05:28.638305       8 log.go:172] (0xc001df34a0) (0xc001a58500) Stream removed, broadcasting: 1
I0218 23:05:28.638936       8 log.go:172] (0xc001df34a0) (0xc000b14e60) Stream removed, broadcasting: 5
I0218 23:05:28.639103       8 log.go:172] (0xc001df34a0) (0xc001a58500) Stream removed, broadcasting: 1
I0218 23:05:28.639124       8 log.go:172] (0xc001df34a0) (0xc000421ea0) Stream removed, broadcasting: 3
I0218 23:05:28.639132       8 log.go:172] (0xc001df34a0) (0xc000b14e60) Stream removed, broadcasting: 5
Feb 18 23:05:28.639: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:05:28.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0218 23:05:28.639869       8 log.go:172] (0xc001df34a0) Go away received
STEP: Destroying namespace "pod-network-test-5045" for this suite.

• [SLOW TEST:42.845 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4376,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:05:28.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-48e90113-82c8-4de4-b6f8-c3d7b0f1b94a
STEP: Creating a pod to test consume configMaps
Feb 18 23:05:28.809: INFO: Waiting up to 5m0s for pod "pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa" in namespace "configmap-577" to be "success or failure"
Feb 18 23:05:28.816: INFO: Pod "pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa": Phase="Pending", Reason="", readiness=false. Elapsed: 7.026451ms
Feb 18 23:05:30.826: INFO: Pod "pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016100616s
Feb 18 23:05:32.835: INFO: Pod "pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025673433s
Feb 18 23:05:35.938: INFO: Pod "pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa": Phase="Pending", Reason="", readiness=false. Elapsed: 7.128362487s
Feb 18 23:05:37.954: INFO: Pod "pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa": Phase="Pending", Reason="", readiness=false. Elapsed: 9.144490864s
Feb 18 23:05:39.964: INFO: Pod "pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa": Phase="Pending", Reason="", readiness=false. Elapsed: 11.154627175s
Feb 18 23:05:41.973: INFO: Pod "pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa": Phase="Pending", Reason="", readiness=false. Elapsed: 13.163440717s
Feb 18 23:05:43.982: INFO: Pod "pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.172239775s
STEP: Saw pod success
Feb 18 23:05:43.982: INFO: Pod "pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa" satisfied condition "success or failure"
Feb 18 23:05:43.988: INFO: Trying to get logs from node jerma-node pod pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa container configmap-volume-test: 
STEP: delete the pod
Feb 18 23:05:44.169: INFO: Waiting for pod pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa to disappear
Feb 18 23:05:44.174: INFO: Pod pod-configmaps-497c780a-c5ef-4fbe-9cf9-ef4a17b506aa no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:05:44.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-577" for this suite.

• [SLOW TEST:15.529 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4379,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:05:44.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb 18 23:05:55.110: INFO: Successfully updated pod "labelsupdate2bbb2e06-ca24-490e-a943-e82667ab1e84"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:05:57.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2256" for this suite.

• [SLOW TEST:13.022 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4384,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:05:57.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 18 23:05:57.332: INFO: Waiting up to 5m0s for pod "pod-99754f0f-5080-4d70-8ace-934c95620da8" in namespace "emptydir-2293" to be "success or failure"
Feb 18 23:05:57.358: INFO: Pod "pod-99754f0f-5080-4d70-8ace-934c95620da8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.009656ms
Feb 18 23:05:59.366: INFO: Pod "pod-99754f0f-5080-4d70-8ace-934c95620da8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034437409s
Feb 18 23:06:01.375: INFO: Pod "pod-99754f0f-5080-4d70-8ace-934c95620da8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042652724s
Feb 18 23:06:03.381: INFO: Pod "pod-99754f0f-5080-4d70-8ace-934c95620da8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049283487s
Feb 18 23:06:05.387: INFO: Pod "pod-99754f0f-5080-4d70-8ace-934c95620da8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055387269s
Feb 18 23:06:07.393: INFO: Pod "pod-99754f0f-5080-4d70-8ace-934c95620da8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060718512s
STEP: Saw pod success
Feb 18 23:06:07.393: INFO: Pod "pod-99754f0f-5080-4d70-8ace-934c95620da8" satisfied condition "success or failure"
Feb 18 23:06:07.396: INFO: Trying to get logs from node jerma-node pod pod-99754f0f-5080-4d70-8ace-934c95620da8 container test-container: 
STEP: delete the pod
Feb 18 23:06:07.428: INFO: Waiting for pod pod-99754f0f-5080-4d70-8ace-934c95620da8 to disappear
Feb 18 23:06:07.432: INFO: Pod pod-99754f0f-5080-4d70-8ace-934c95620da8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:06:07.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2293" for this suite.

• [SLOW TEST:10.237 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4390,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:06:07.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 23:06:07.610: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.171427ms)
Feb 18 23:06:07.614: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.987263ms)
Feb 18 23:06:07.617: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.694118ms)
Feb 18 23:06:07.622: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.021696ms)
Feb 18 23:06:07.626: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.03503ms)
Feb 18 23:06:07.629: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.764397ms)
Feb 18 23:06:07.760: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 130.547562ms)
Feb 18 23:06:07.780: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.267226ms)
Feb 18 23:06:07.787: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.134488ms)
Feb 18 23:06:07.793: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.716358ms)
Feb 18 23:06:07.801: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.70983ms)
Feb 18 23:06:07.810: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.981829ms)
Feb 18 23:06:07.818: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.957884ms)
Feb 18 23:06:07.826: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.721137ms)
Feb 18 23:06:07.833: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.36229ms)
Feb 18 23:06:07.847: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.4845ms)
Feb 18 23:06:07.860: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.603894ms)
Feb 18 23:06:07.868: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.784248ms)
Feb 18 23:06:07.874: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.541478ms)
Feb 18 23:06:07.879: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.906185ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:06:07.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1161" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":267,"skipped":4399,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:06:07.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
STEP: creating the pod
Feb 18 23:06:08.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4323'
Feb 18 23:06:09.436: INFO: stderr: ""
Feb 18 23:06:09.436: INFO: stdout: "pod/pause created\n"
Feb 18 23:06:09.437: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 18 23:06:09.437: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4323" to be "running and ready"
Feb 18 23:06:09.504: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 66.600038ms
Feb 18 23:06:11.509: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071611567s
Feb 18 23:06:13.537: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100293387s
Feb 18 23:06:15.546: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109246455s
Feb 18 23:06:17.556: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.118737644s
Feb 18 23:06:17.556: INFO: Pod "pause" satisfied condition "running and ready"
Feb 18 23:06:17.556: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 18 23:06:17.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4323'
Feb 18 23:06:17.730: INFO: stderr: ""
Feb 18 23:06:17.730: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 18 23:06:17.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4323'
Feb 18 23:06:17.856: INFO: stderr: ""
Feb 18 23:06:17.856: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 18 23:06:17.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4323'
Feb 18 23:06:17.987: INFO: stderr: ""
Feb 18 23:06:17.987: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 18 23:06:17.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4323'
Feb 18 23:06:18.119: INFO: stderr: ""
Feb 18 23:06:18.120: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369
STEP: using delete to clean up resources
Feb 18 23:06:18.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4323'
Feb 18 23:06:18.236: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 18 23:06:18.236: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 18 23:06:18.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4323'
Feb 18 23:06:18.418: INFO: stderr: "No resources found in kubectl-4323 namespace.\n"
Feb 18 23:06:18.418: INFO: stdout: ""
Feb 18 23:06:18.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4323 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 18 23:06:18.610: INFO: stderr: ""
Feb 18 23:06:18.611: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:06:18.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4323" for this suite.

• [SLOW TEST:10.730 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":268,"skipped":4416,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
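Note: the `kubectl get pod -L testing-label` stdout captured in the test above is a fixed-width table. A sketch of extracting the label column from such a capture (splitting on runs of whitespace is an assumption that holds here only because no field contains spaces, and it breaks when the label column is empty, as in the second capture above):

```python
# Sample stdout from `kubectl get pod pause -L testing-label`, as captured
# in the log above while the label was set.
stdout = ("NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\n"
          "pause   1/1     Running   0          8s    testing-label-value\n")

# Split header and rows on whitespace; locate the label column by name.
header, *rows = [row.split() for row in stdout.strip().splitlines()]
label_idx = header.index("TESTING-LABEL")
labels = {row[0]: row[label_idx] for row in rows}
print(labels)
```

For anything beyond a quick check, `kubectl get ... -o json` (or the `-o go-template` form the test itself uses during cleanup) is the robust way to read labels, since it avoids column-alignment assumptions entirely.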
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:06:18.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-0d30f629-4324-4d14-8599-803630dd8639 in namespace container-probe-3122
Feb 18 23:06:28.932: INFO: Started pod liveness-0d30f629-4324-4d14-8599-803630dd8639 in namespace container-probe-3122
STEP: checking the pod's current state and verifying that restartCount is present
Feb 18 23:06:28.936: INFO: Initial restart count of pod liveness-0d30f629-4324-4d14-8599-803630dd8639 is 0
Feb 18 23:06:49.021: INFO: Restart count of pod container-probe-3122/liveness-0d30f629-4324-4d14-8599-803630dd8639 is now 1 (20.084780673s elapsed)
Feb 18 23:07:11.102: INFO: Restart count of pod container-probe-3122/liveness-0d30f629-4324-4d14-8599-803630dd8639 is now 2 (42.165884477s elapsed)
Feb 18 23:07:29.173: INFO: Restart count of pod container-probe-3122/liveness-0d30f629-4324-4d14-8599-803630dd8639 is now 3 (1m0.237144838s elapsed)
Feb 18 23:07:51.265: INFO: Restart count of pod container-probe-3122/liveness-0d30f629-4324-4d14-8599-803630dd8639 is now 4 (1m22.32900452s elapsed)
Feb 18 23:08:51.880: INFO: Restart count of pod container-probe-3122/liveness-0d30f629-4324-4d14-8599-803630dd8639 is now 5 (2m22.943875352s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:08:51.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3122" for this suite.

• [SLOW TEST:153.351 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4429,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
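Note: the monotonicity property this test asserts can also be checked after the fact from the "Restart count ... is now N" lines it logs. A sketch, using abbreviated copies of the lines above (the pod name is elided with `...` in these sample strings):

```python
import re

# Abbreviated "Restart count" lines from the container-probe test above.
log_lines = [
    "Restart count of pod container-probe-3122/liveness-... is now 1 (20.084780673s elapsed)",
    "Restart count of pod container-probe-3122/liveness-... is now 2 (42.165884477s elapsed)",
    "Restart count of pod container-probe-3122/liveness-... is now 3 (1m0.237144838s elapsed)",
]

# Pull out each observed count and confirm the sequence strictly increases.
counts = [int(re.search(r"is now (\d+)", ln).group(1)) for ln in log_lines]
monotonic = all(a < b for a, b in zip(counts, counts[1:]))
print(counts, "monotonically increasing:", monotonic)
```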
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:08:51.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 23:08:52.626: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 23:08:54.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 23:08:56.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 23:08:58.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 23:09:00.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 23:09:02.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717664132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
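The wait loop above polls the DeploymentStatus dumps until the rollout completes. A toy readiness predicate — plain dicts standing in for the status fields shown above, not the framework's actual check:

```python
def deployment_complete(status, desired_replicas=1):
    """Rollout is done once all desired replicas are updated, ready,
    and available, and none remain unavailable."""
    return (
        status["updatedReplicas"] == desired_replicas
        and status["readyReplicas"] == desired_replicas
        and status["availableReplicas"] == desired_replicas
        and status["unavailableReplicas"] == 0
    )

# Snapshot matching the dumps above: 1 replica updated, none ready yet.
pending = {"updatedReplicas": 1, "readyReplicas": 0,
           "availableReplicas": 1 - 1, "unavailableReplicas": 1}
# Snapshot after the webhook pod comes up and the wait loop exits.
ready = {"updatedReplicas": 1, "readyReplicas": 1,
         "availableReplicas": 1, "unavailableReplicas": 0}

assert not deployment_complete(pending)
assert deployment_complete(ready)
```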
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 23:09:05.705: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:09:05.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8468" for this suite.
STEP: Destroying namespace "webhook-8468-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.113 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":270,"skipped":4430,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:09:06.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 18 23:09:06.179: INFO: Waiting up to 5m0s for pod "pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9" in namespace "emptydir-305" to be "success or failure"
Feb 18 23:09:06.183: INFO: Pod "pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.536814ms
Feb 18 23:09:08.190: INFO: Pod "pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011127226s
Feb 18 23:09:10.200: INFO: Pod "pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021136022s
Feb 18 23:09:12.206: INFO: Pod "pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027041542s
Feb 18 23:09:14.213: INFO: Pod "pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033442302s
Feb 18 23:09:16.222: INFO: Pod "pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.042587758s
Feb 18 23:09:18.461: INFO: Pod "pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.281909414s
Feb 18 23:09:20.470: INFO: Pod "pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.290711527s
Feb 18 23:09:22.493: INFO: Pod "pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.314064104s
STEP: Saw pod success
Feb 18 23:09:22.493: INFO: Pod "pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9" satisfied condition "success or failure"
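The pod is re-checked every ~2s until it reaches a terminal phase, which is the "success or failure" condition the lines above wait for. A self-contained sketch of that polling loop (clock and phase getter injected so it runs without a cluster; names are illustrative, not the framework's):

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase or time runs out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated phase sequence mirroring the log above: Pending x8, then Succeeded.
phases = iter(["Pending"] * 8 + ["Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None)
assert result == "Succeeded"
```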
Feb 18 23:09:22.521: INFO: Trying to get logs from node jerma-node pod pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9 container test-container: 
STEP: delete the pod
Feb 18 23:09:22.851: INFO: Waiting for pod pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9 to disappear
Feb 18 23:09:22.861: INFO: Pod pod-ba12c52f-1682-4d7c-a8f6-6ba0c86cf1a9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:09:22.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-305" for this suite.

• [SLOW TEST:16.799 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4431,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:09:22.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-0b892ff8-a391-4259-bf3b-754a883a2450
STEP: Creating a pod to test consume configMaps
Feb 18 23:09:23.139: INFO: Waiting up to 5m0s for pod "pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757" in namespace "configmap-6052" to be "success or failure"
Feb 18 23:09:23.159: INFO: Pod "pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757": Phase="Pending", Reason="", readiness=false. Elapsed: 20.405937ms
Feb 18 23:09:25.165: INFO: Pod "pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025788144s
Feb 18 23:09:27.170: INFO: Pod "pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031143899s
Feb 18 23:09:29.181: INFO: Pod "pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042149253s
Feb 18 23:09:31.187: INFO: Pod "pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048364196s
Feb 18 23:09:33.193: INFO: Pod "pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054430162s
Feb 18 23:09:35.201: INFO: Pod "pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.062061065s
STEP: Saw pod success
Feb 18 23:09:35.201: INFO: Pod "pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757" satisfied condition "success or failure"
Feb 18 23:09:35.205: INFO: Trying to get logs from node jerma-node pod pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757 container configmap-volume-test: 
STEP: delete the pod
Feb 18 23:09:35.278: INFO: Waiting for pod pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757 to disappear
Feb 18 23:09:35.286: INFO: Pod pod-configmaps-229e3641-aba6-4e5b-8f57-495967bd3757 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:09:35.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6052" for this suite.

• [SLOW TEST:12.415 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4437,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:09:35.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 23:09:35.432: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 18 23:09:37.062: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:09:37.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6673" for this suite.
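The RC steps above hinge on quota admission rejecting pod creations beyond the limit, which the controller then surfaces as a ReplicaFailure condition until the RC is scaled down within quota. A toy model of that behaviour (not the controller's actual reconcile code; the desired count of 3 is an assumed example of "more than the allowed pod quota"):

```python
def reconcile(desired, quota_pods):
    """Create as many pods as quota allows; surface a condition if short."""
    created = min(desired, quota_pods)
    conditions = []
    if created < desired:
        conditions.append({"type": "ReplicaFailure", "status": "True",
                           "reason": "FailedCreate"})
    return created, conditions

# RC asks for 3 pods against a quota of 2: failure condition is set.
created, conds = reconcile(desired=3, quota_pods=2)
assert created == 2 and conds[0]["type"] == "ReplicaFailure"

# Scaling down to 2 satisfies the quota: the condition clears.
created, conds = reconcile(desired=2, quota_pods=2)
assert created == 2 and conds == []
```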
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":273,"skipped":4440,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:09:37.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 18 23:09:37.878: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:09:39.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2402" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":274,"skipped":4445,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 18 23:09:39.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-ssckc in namespace proxy-3360
I0218 23:09:39.663270       8 runners.go:189] Created replication controller with name: proxy-service-ssckc, namespace: proxy-3360, replica count: 1
I0218 23:09:40.714855       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:41.716283       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:42.716930       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:43.717307       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:44.717695       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:45.718087       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:46.718458       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:47.719447       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:48.719934       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:49.720464       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:50.721363       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:51.721895       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:52.722329       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:53.723086       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:54.724393       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 23:09:55.725828       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0218 23:09:56.726606       8 runners.go:189] proxy-service-ssckc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 18 23:09:56.733: INFO: setup took 17.128497916s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
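The attempt count follows directly from the test matrix stated on the line above — a one-line sanity check of that arithmetic:

```python
# 16 proxy endpoint cases, each retried 20 times.
cases, attempts_per_case = 16, 20
total_attempts = cases * attempts_per_case
assert total_attempts == 320  # matches "320 total attempts" in the log
```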
Feb 18 23:09:56.751: INFO: (0) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 17.737221ms)
Feb 18 23:09:56.751: INFO: (0) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 18.309607ms)
Feb 18 23:09:56.751: INFO: (0) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 18.364386ms)
Feb 18 23:09:56.751: INFO: (0) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 18.614691ms)
Feb 18 23:09:56.751: INFO: (0) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:1080/proxy/: ... (200; 18.624494ms)
Feb 18 23:09:56.752: INFO: (0) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 18.79756ms)
Feb 18 23:09:56.752: INFO: (0) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 18.799458ms)
Feb 18 23:09:56.752: INFO: (0) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 18.671658ms)
Feb 18 23:09:56.752: INFO: (0) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 18.784365ms)
Feb 18 23:09:56.752: INFO: (0) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 18.945771ms)
Feb 18 23:09:56.756: INFO: (0) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 22.803167ms)
Feb 18 23:09:56.759: INFO: (0) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 25.871359ms)
Feb 18 23:09:56.759: INFO: (0) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 26.549814ms)
Feb 18 23:09:56.760: INFO: (0) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: ... (200; 14.878494ms)
Feb 18 23:09:56.776: INFO: (1) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 15.023219ms)
Feb 18 23:09:56.776: INFO: (1) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 15.085272ms)
Feb 18 23:09:56.777: INFO: (1) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 16.504869ms)
Feb 18 23:09:56.777: INFO: (1) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 16.774935ms)
Feb 18 23:09:56.778: INFO: (1) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 16.904286ms)
Feb 18 23:09:56.778: INFO: (1) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 16.883651ms)
Feb 18 23:09:56.789: INFO: (2) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 11.032749ms)
Feb 18 23:09:56.789: INFO: (2) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 11.045596ms)
Feb 18 23:09:56.790: INFO: (2) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 12.273395ms)
Feb 18 23:09:56.792: INFO: (2) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 13.941253ms)
Feb 18 23:09:56.793: INFO: (2) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 15.544253ms)
Feb 18 23:09:56.795: INFO: (2) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 17.622826ms)
Feb 18 23:09:56.796: INFO: (2) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 17.771239ms)
Feb 18 23:09:56.796: INFO: (2) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 17.902149ms)
Feb 18 23:09:56.796: INFO: (2) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:1080/proxy/: ... (200; 17.832171ms)
Feb 18 23:09:56.796: INFO: (2) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 17.945756ms)
Feb 18 23:09:56.799: INFO: (2) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 21.16225ms)
Feb 18 23:09:56.800: INFO: (2) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 21.733422ms)
Feb 18 23:09:56.800: INFO: (2) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 21.860625ms)
Feb 18 23:09:56.800: INFO: (2) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 21.96242ms)
Feb 18 23:09:56.800: INFO: (2) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: test<... (200; 31.749613ms)
Feb 18 23:09:56.833: INFO: (3) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 32.640227ms)
Feb 18 23:09:56.834: INFO: (3) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 33.061416ms)
Feb 18 23:09:56.834: INFO: (3) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: ... (200; 33.907494ms)
Feb 18 23:09:56.835: INFO: (3) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 34.290233ms)
Feb 18 23:09:56.835: INFO: (3) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 34.116247ms)
Feb 18 23:09:56.835: INFO: (3) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 34.609216ms)
Feb 18 23:09:56.836: INFO: (3) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 35.22053ms)
Feb 18 23:09:56.841: INFO: (4) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 4.959194ms)
Feb 18 23:09:56.843: INFO: (4) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 7.257929ms)
Feb 18 23:09:56.845: INFO: (4) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: test (200; 16.484184ms)
Feb 18 23:09:56.853: INFO: (4) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 16.101992ms)
Feb 18 23:09:56.853: INFO: (4) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 16.162715ms)
Feb 18 23:09:56.853: INFO: (4) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 16.509486ms)
Feb 18 23:09:56.853: INFO: (4) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 16.086658ms)
Feb 18 23:09:56.853: INFO: (4) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:1080/proxy/: ... (200; 16.438977ms)
Feb 18 23:09:56.853: INFO: (4) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 16.460613ms)
Feb 18 23:09:56.854: INFO: (4) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 16.960114ms)
Feb 18 23:09:56.854: INFO: (4) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 17.49534ms)
Feb 18 23:09:56.854: INFO: (4) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 17.489848ms)
Feb 18 23:09:56.864: INFO: (5) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 9.820823ms)
Feb 18 23:09:56.866: INFO: (5) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 11.402745ms)
Feb 18 23:09:56.866: INFO: (5) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 11.581257ms)
Feb 18 23:09:56.866: INFO: (5) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 11.552727ms)
Feb 18 23:09:56.867: INFO: (5) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 12.724775ms)
Feb 18 23:09:56.867: INFO: (5) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 13.175358ms)
Feb 18 23:09:56.868: INFO: (5) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 13.339064ms)
Feb 18 23:09:56.868: INFO: (5) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 13.757627ms)
Feb 18 23:09:56.868: INFO: (5) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 14.049417ms)
Feb 18 23:09:56.868: INFO: (5) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 13.932528ms)
Feb 18 23:09:56.868: INFO: (5) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 13.906089ms)
Feb 18 23:09:56.869: INFO: (5) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: ... (200; 14.670813ms)
Feb 18 23:09:56.869: INFO: (5) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 14.951397ms)
Feb 18 23:09:56.871: INFO: (5) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 16.507038ms)
Feb 18 23:09:56.878: INFO: (6) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:1080/proxy/: ... (200; 6.905375ms)
Feb 18 23:09:56.878: INFO: (6) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 7.157252ms)
Feb 18 23:09:56.879: INFO: (6) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 7.420335ms)
Feb 18 23:09:56.879: INFO: (6) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 7.770976ms)
Feb 18 23:09:56.879: INFO: (6) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 8.072883ms)
Feb 18 23:09:56.882: INFO: (6) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 11.322553ms)
Feb 18 23:09:56.883: INFO: (6) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 11.621903ms)
Feb 18 23:09:56.883: INFO: (6) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 11.658778ms)
Feb 18 23:09:56.883: INFO: (6) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 11.66799ms)
Feb 18 23:09:56.883: INFO: (6) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 12.060691ms)
Feb 18 23:09:56.884: INFO: (6) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 12.707574ms)
Feb 18 23:09:56.885: INFO: (6) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 13.871286ms)
Feb 18 23:09:56.885: INFO: (6) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 14.299805ms)
Feb 18 23:09:56.886: INFO: (6) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: test (200; 14.750073ms)
Feb 18 23:09:56.894: INFO: (7) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 8.481857ms)
Feb 18 23:09:56.895: INFO: (7) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 8.487928ms)
Feb 18 23:09:56.902: INFO: (7) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 15.379599ms)
Feb 18 23:09:56.902: INFO: (7) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 16.179217ms)
Feb 18 23:09:56.904: INFO: (7) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 17.490443ms)
Feb 18 23:09:56.904: INFO: (7) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 17.345978ms)
Feb 18 23:09:56.904: INFO: (7) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 17.526454ms)
Feb 18 23:09:56.906: INFO: (7) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 19.189227ms)
Feb 18 23:09:56.906: INFO: (7) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 19.814059ms)
Feb 18 23:09:56.907: INFO: (7) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:1080/proxy/: ... (200; 20.507314ms)
Feb 18 23:09:56.908: INFO: (7) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 21.089716ms)
Feb 18 23:09:56.908: INFO: (7) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 21.471791ms)
Feb 18 23:09:56.908: INFO: (7) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 21.454342ms)
Feb 18 23:09:56.908: INFO: (7) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 21.404498ms)
Feb 18 23:09:56.908: INFO: (7) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: ... (200; 9.514815ms)
Feb 18 23:09:56.918: INFO: (8) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 9.61384ms)
Feb 18 23:09:56.918: INFO: (8) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 9.833431ms)
Feb 18 23:09:56.918: INFO: (8) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 9.879819ms)
Feb 18 23:09:56.918: INFO: (8) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 10.094929ms)
Feb 18 23:09:56.921: INFO: (8) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 12.45606ms)
Feb 18 23:09:56.923: INFO: (8) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 14.647626ms)
Feb 18 23:09:56.923: INFO: (8) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 14.795521ms)
Feb 18 23:09:56.923: INFO: (8) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 14.842614ms)
Feb 18 23:09:56.924: INFO: (8) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 15.577103ms)
Feb 18 23:09:56.926: INFO: (8) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 17.026396ms)
Feb 18 23:09:56.939: INFO: (9) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 13.117058ms)
Feb 18 23:09:56.939: INFO: (9) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 13.47286ms)
Feb 18 23:09:56.940: INFO: (9) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 13.530056ms)
Feb 18 23:09:56.940: INFO: (9) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 13.886272ms)
Feb 18 23:09:56.940: INFO: (9) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:1080/proxy/: ... (200; 14.009643ms)
Feb 18 23:09:56.940: INFO: (9) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 13.785803ms)
Feb 18 23:09:56.940: INFO: (9) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 14.408579ms)
Feb 18 23:09:56.941: INFO: (9) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: test (200; 10.180928ms)
Feb 18 23:09:56.956: INFO: (10) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 11.164764ms)
Feb 18 23:09:56.957: INFO: (10) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 11.222042ms)
Feb 18 23:09:56.957: INFO: (10) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 11.686348ms)
Feb 18 23:09:56.957: INFO: (10) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: ... (200; 13.520985ms)
Feb 18 23:09:56.959: INFO: (10) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 13.861683ms)
Feb 18 23:09:56.959: INFO: (10) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 13.984986ms)
Feb 18 23:09:56.959: INFO: (10) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 14.179661ms)
Feb 18 23:09:56.961: INFO: (10) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 16.13146ms)
Feb 18 23:09:56.976: INFO: (11) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 14.816551ms)
Feb 18 23:09:56.976: INFO: (11) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 14.931155ms)
Feb 18 23:09:56.976: INFO: (11) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 14.82516ms)
Feb 18 23:09:56.976: INFO: (11) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 14.817401ms)
Feb 18 23:09:56.976: INFO: (11) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 14.8896ms)
Feb 18 23:09:56.977: INFO: (11) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 15.427947ms)
Feb 18 23:09:56.977: INFO: (11) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:1080/proxy/: ... (200; 15.314846ms)
Feb 18 23:09:56.978: INFO: (11) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: test<... (200; 16.564351ms)
Feb 18 23:09:56.980: INFO: (11) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 18.511561ms)
Feb 18 23:09:56.981: INFO: (11) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 19.213678ms)
Feb 18 23:09:56.981: INFO: (11) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 19.632115ms)
Feb 18 23:09:56.983: INFO: (11) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 21.511274ms)
Feb 18 23:09:56.983: INFO: (11) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 21.501954ms)
Feb 18 23:09:56.983: INFO: (11) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 21.546622ms)
Feb 18 23:09:56.990: INFO: (12) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 6.596742ms)
Feb 18 23:09:56.990: INFO: (12) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 6.861179ms)
Feb 18 23:09:56.990: INFO: (12) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: ... (200; 8.316018ms)
Feb 18 23:09:56.992: INFO: (12) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 8.60283ms)
Feb 18 23:09:56.992: INFO: (12) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 8.54621ms)
Feb 18 23:09:56.993: INFO: (12) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 10.132518ms)
Feb 18 23:09:56.994: INFO: (12) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 10.637235ms)
Feb 18 23:09:56.994: INFO: (12) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 10.852098ms)
Feb 18 23:09:56.994: INFO: (12) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 11.22369ms)
Feb 18 23:09:56.994: INFO: (12) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 11.165229ms)
Feb 18 23:09:56.994: INFO: (12) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 11.142804ms)
Feb 18 23:09:56.998: INFO: (13) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 3.56911ms)
Feb 18 23:09:57.004: INFO: (13) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:1080/proxy/: ... (200; 9.869325ms)
Feb 18 23:09:57.005: INFO: (13) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 9.927073ms)
Feb 18 23:09:57.005: INFO: (13) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 10.195979ms)
Feb 18 23:09:57.006: INFO: (13) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 11.288877ms)
Feb 18 23:09:57.006: INFO: (13) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 11.371577ms)
Feb 18 23:09:57.007: INFO: (13) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 12.059319ms)
Feb 18 23:09:57.008: INFO: (13) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 13.013375ms)
Feb 18 23:09:57.008: INFO: (13) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 13.10883ms)
Feb 18 23:09:57.008: INFO: (13) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: test (200; 13.438538ms)
Feb 18 23:09:57.008: INFO: (13) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 13.614703ms)
Feb 18 23:09:57.008: INFO: (13) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 13.571334ms)
Feb 18 23:09:57.015: INFO: (14) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 6.691047ms)
Feb 18 23:09:57.016: INFO: (14) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 6.881572ms)
Feb 18 23:09:57.018: INFO: (14) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 9.421058ms)
Feb 18 23:09:57.018: INFO: (14) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 7.99695ms)
Feb 18 23:09:57.018: INFO: (14) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: ... (200; 8.411682ms)
Feb 18 23:09:57.026: INFO: (14) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 16.03429ms)
Feb 18 23:09:57.026: INFO: (14) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 17.331676ms)
Feb 18 23:09:57.026: INFO: (14) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 16.216755ms)
Feb 18 23:09:57.026: INFO: (14) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 17.236385ms)
Feb 18 23:09:57.027: INFO: (14) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 16.568334ms)
Feb 18 23:09:57.027: INFO: (14) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 16.833337ms)
Feb 18 23:09:57.029: INFO: (14) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 20.153001ms)
Feb 18 23:09:57.029: INFO: (14) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 19.366804ms)
Feb 18 23:09:57.029: INFO: (14) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 19.856277ms)
Feb 18 23:09:57.056: INFO: (15) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 26.511467ms)
Feb 18 23:09:57.057: INFO: (15) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 26.410783ms)
Feb 18 23:09:57.057: INFO: (15) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 27.377274ms)
Feb 18 23:09:57.057: INFO: (15) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 26.463652ms)
Feb 18 23:09:57.057: INFO: (15) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 27.164747ms)
Feb 18 23:09:57.057: INFO: (15) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 26.974773ms)
Feb 18 23:09:57.057: INFO: (15) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:1080/proxy/: ... (200; 27.037995ms)
Feb 18 23:09:57.058: INFO: (15) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 28.231132ms)
Feb 18 23:09:57.058: INFO: (15) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 27.612827ms)
Feb 18 23:09:57.058: INFO: (15) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: ... (200; 10.31198ms)
Feb 18 23:09:57.076: INFO: (16) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 11.131315ms)
Feb 18 23:09:57.076: INFO: (16) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 11.289314ms)
Feb 18 23:09:57.076: INFO: (16) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 11.948667ms)
Feb 18 23:09:57.076: INFO: (16) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: test (200; 12.062803ms)
Feb 18 23:09:57.082: INFO: (16) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 17.319328ms)
Feb 18 23:09:57.082: INFO: (16) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 17.660208ms)
Feb 18 23:09:57.082: INFO: (16) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 17.681297ms)
Feb 18 23:09:57.082: INFO: (16) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 17.965194ms)
Feb 18 23:09:57.082: INFO: (16) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 17.679943ms)
Feb 18 23:09:57.083: INFO: (16) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 18.093559ms)
Feb 18 23:09:57.083: INFO: (16) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 18.195804ms)
Feb 18 23:09:57.083: INFO: (16) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 18.55375ms)
Feb 18 23:09:57.089: INFO: (17) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 5.998255ms)
Feb 18 23:09:57.089: INFO: (17) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 6.037372ms)
Feb 18 23:09:57.089: INFO: (17) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 6.010944ms)
Feb 18 23:09:57.089: INFO: (17) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 6.087659ms)
Feb 18 23:09:57.089: INFO: (17) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 6.191785ms)
Feb 18 23:09:57.089: INFO: (17) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 6.22312ms)
Feb 18 23:09:57.090: INFO: (17) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 6.719172ms)
Feb 18 23:09:57.090: INFO: (17) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: ... (200; 6.860955ms)
Feb 18 23:09:57.090: INFO: (17) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 7.087683ms)
Feb 18 23:09:57.092: INFO: (17) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 9.23304ms)
Feb 18 23:09:57.093: INFO: (17) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 9.201119ms)
Feb 18 23:09:57.093: INFO: (17) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 9.570296ms)
Feb 18 23:09:57.093: INFO: (17) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 9.954614ms)
Feb 18 23:09:57.093: INFO: (17) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 10.007826ms)
Feb 18 23:09:57.093: INFO: (17) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname2/proxy/: bar (200; 10.098291ms)
Feb 18 23:09:57.096: INFO: (18) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 2.476359ms)
Feb 18 23:09:57.096: INFO: (18) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:1080/proxy/: ... (200; 2.946669ms)
Feb 18 23:09:57.097: INFO: (18) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 3.289794ms)
Feb 18 23:09:57.098: INFO: (18) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 4.317708ms)
Feb 18 23:09:57.102: INFO: (18) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 8.185153ms)
Feb 18 23:09:57.102: INFO: (18) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/: test (200; 8.886359ms)
Feb 18 23:09:57.102: INFO: (18) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 8.883448ms)
Feb 18 23:09:57.104: INFO: (18) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 10.680152ms)
Feb 18 23:09:57.105: INFO: (18) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 11.041329ms)
Feb 18 23:09:57.107: INFO: (18) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname1/proxy/: foo (200; 13.424958ms)
Feb 18 23:09:57.107: INFO: (18) /api/v1/namespaces/proxy-3360/services/proxy-service-ssckc:portname2/proxy/: bar (200; 13.553847ms)
Feb 18 23:09:57.107: INFO: (18) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname1/proxy/: tls baz (200; 13.723142ms)
Feb 18 23:09:57.107: INFO: (18) /api/v1/namespaces/proxy-3360/services/https:proxy-service-ssckc:tlsportname2/proxy/: tls qux (200; 13.838913ms)
Feb 18 23:09:57.107: INFO: (18) /api/v1/namespaces/proxy-3360/services/http:proxy-service-ssckc:portname1/proxy/: foo (200; 13.92531ms)
Feb 18 23:09:57.114: INFO: (19) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:1080/proxy/: test<... (200; 6.714351ms)
Feb 18 23:09:57.115: INFO: (19) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp/proxy/: test (200; 6.938908ms)
Feb 18 23:09:57.115: INFO: (19) /api/v1/namespaces/proxy-3360/pods/proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 7.199761ms)
Feb 18 23:09:57.115: INFO: (19) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:1080/proxy/: ... (200; 7.162569ms)
Feb 18 23:09:57.115: INFO: (19) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:460/proxy/: tls baz (200; 7.456896ms)
Feb 18 23:09:57.115: INFO: (19) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:162/proxy/: bar (200; 7.390187ms)
Feb 18 23:09:57.116: INFO: (19) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:462/proxy/: tls qux (200; 8.402453ms)
Feb 18 23:09:57.116: INFO: (19) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-ssckc-vtvwp:160/proxy/: foo (200; 8.461027ms)
Feb 18 23:09:57.116: INFO: (19) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-ssckc-vtvwp:443/proxy/:
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 18 23:10:20.615: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 18 23:10:21.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4796" for this suite.

• [SLOW TEST:9.713 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4524,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
Feb 18 23:10:22.100: INFO: Running AfterSuite actions on all nodes
Feb 18 23:10:22.100: INFO: Running AfterSuite actions on node 1
Feb 18 23:10:22.100: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":276,"skipped":4536,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 2 Failures:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:762

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315

Ran 278 of 4814 Specs in 7177.635 seconds
FAIL! -- 276 Passed | 2 Failed | 0 Pending | 4536 Skipped
--- FAIL: TestE2E (7177.73s)
FAIL
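
The two failed specs above can usually be retried in isolation by focusing the suite on their spec names with Ginkgo's focus regex. A minimal sketch, assuming an `e2e.test` binary on PATH (e.g. built via `make WHAT=test/e2e/e2e.test`) and the kubeconfig path shown in this log; the script only prints the command rather than executing it:

```shell
#!/bin/sh
# Sketch: re-run only the two specs reported failed in the summary above.
# The kubeconfig path is taken from this log; the e2e.test binary
# location is an assumption about the local build layout.
KUBECONFIG=/root/.kube/config
E2E_BIN=e2e.test  # assumed to be on PATH after building test/e2e/e2e.test

# Regex matching substrings of both failed spec names from the summary.
FOCUS='Should recreate evicted statefulset|should create and stop a working application'

# Print the invocation instead of running it, so the sketch stays inert.
echo "$E2E_BIN --kubeconfig=$KUBECONFIG --ginkgo.focus=\"$FOCUS\""
```

Ginkgo treats the focus value as a regular expression, so the `|` alternation selects both specs in a single run while everything else is reported as skipped.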