I0704 08:10:29.157663 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0704 08:10:29.157920 6 e2e.go:109] Starting e2e run "495c0ca3-30ba-4919-ac44-c0ef702cd874" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1593850228 - Will randomize all specs
Will run 278 of 4843 specs

Jul 4 08:10:29.211: INFO: >>> kubeConfig: /root/.kube/config
Jul 4 08:10:29.215: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 4 08:10:29.242: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 4 08:10:29.271: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 4 08:10:29.271: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 4 08:10:29.271: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 4 08:10:29.283: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 4 08:10:29.283: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 4 08:10:29.283: INFO: e2e test version: v1.17.8
Jul 4 08:10:29.285: INFO: kube-apiserver version: v1.17.5
Jul 4 08:10:29.285: INFO: >>> kubeConfig: /root/.kube/config
Jul 4 08:10:29.289: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:10:29.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace
api object, basename dns
Jul 4 08:10:29.386: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3609 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3609;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3609 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3609;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3609.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3609.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3609.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3609.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3609.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 56.235.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.235.56_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 56.235.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.235.56_tcp@PTR;
  sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3609 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3609;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3609 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3609;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3609.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3609.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3609.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3609.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3609.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3609.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 56.235.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.235.56_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 56.235.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.235.56_tcp@PTR;
  sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 4 08:10:53.477: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.480: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.483: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.486: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.489: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.498: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.510: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.531: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.534: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.537: INFO: Unable to read jessie_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.540: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.543: INFO: Unable to read jessie_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.546: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.550: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.553: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:53.574: INFO: Lookups using dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3609 wheezy_tcp@dns-test-service.dns-3609 wheezy_udp@dns-test-service.dns-3609.svc wheezy_tcp@dns-test-service.dns-3609.svc wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3609 jessie_tcp@dns-test-service.dns-3609 jessie_udp@dns-test-service.dns-3609.svc jessie_tcp@dns-test-service.dns-3609.svc jessie_udp@_http._tcp.dns-test-service.dns-3609.svc jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc]
Jul 4 08:10:58.579: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.582: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.585: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.587: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.590: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.593: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.595: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.598: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.620: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.623: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.626: INFO: Unable to read jessie_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.629: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.637: INFO: Unable to read jessie_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.640: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.643: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.645: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:10:58.658: INFO: Lookups using dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3609 wheezy_tcp@dns-test-service.dns-3609 wheezy_udp@dns-test-service.dns-3609.svc wheezy_tcp@dns-test-service.dns-3609.svc wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3609 jessie_tcp@dns-test-service.dns-3609 jessie_udp@dns-test-service.dns-3609.svc jessie_tcp@dns-test-service.dns-3609.svc jessie_udp@_http._tcp.dns-test-service.dns-3609.svc jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc]
Jul 4 08:11:03.588: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.599: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.604: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.606: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.608: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.611: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.613: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.615: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.634: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.636: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.639: INFO: Unable to read jessie_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.642: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.644: INFO: Unable to read jessie_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.647: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.648: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.651: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:03.665: INFO: Lookups using dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3609 wheezy_tcp@dns-test-service.dns-3609 wheezy_udp@dns-test-service.dns-3609.svc wheezy_tcp@dns-test-service.dns-3609.svc wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3609 jessie_tcp@dns-test-service.dns-3609 jessie_udp@dns-test-service.dns-3609.svc jessie_tcp@dns-test-service.dns-3609.svc jessie_udp@_http._tcp.dns-test-service.dns-3609.svc jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc]
Jul 4 08:11:08.578: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.581: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.592: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.598: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.600: INFO: Unable to read wheezy_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.602: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.605: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.607: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.624: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.626: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.628: INFO: Unable to read jessie_udp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.630: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609 from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.632: INFO: Unable to read jessie_udp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.635: INFO: Unable to read jessie_tcp@dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.637: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.640: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc from pod dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c: the server could not find the requested resource (get pods dns-test-c67d171f-934f-4d34-807c-1432c038b09c)
Jul 4 08:11:08.678: INFO: Lookups using dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3609 wheezy_tcp@dns-test-service.dns-3609 wheezy_udp@dns-test-service.dns-3609.svc wheezy_tcp@dns-test-service.dns-3609.svc wheezy_udp@_http._tcp.dns-test-service.dns-3609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3609 jessie_tcp@dns-test-service.dns-3609 jessie_udp@dns-test-service.dns-3609.svc jessie_tcp@dns-test-service.dns-3609.svc jessie_udp@_http._tcp.dns-test-service.dns-3609.svc jessie_tcp@_http._tcp.dns-test-service.dns-3609.svc]
Jul 4 08:11:13.670: INFO: DNS probes using dns-3609/dns-test-c67d171f-934f-4d34-807c-1432c038b09c succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:11:14.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3609" for this suite.
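The probe loop the test injects above is hard to read in log form. As a minimal sketch (reconstructed from this log, not the framework's actual generator): each check is a `dig` query over UDP (`+notcp`) or TCP (`+tcp`) that writes `OK` to `/results` when an answer comes back; the doubled `$$` in the log is the e2e framework's template escaping for a single `$`. The `wheezy_` prefix and `/results` path are taken from the log.

```shell
#!/bin/sh
# Emit one probe command as the loop above would run it (hypothetical helper).
# $1: DNS name to resolve, $2: udp|tcp, $3: record type (A or SRV)
probe_cmd() {
  flag="+notcp"                      # dig over UDP by default
  [ "$2" = "tcp" ] && flag="+tcp"    # force TCP for the _tcp variants
  printf 'check="$(dig %s +noall +answer +search %s %s)" && test -n "$check" && echo OK > /results/wheezy_%s@%s\n' \
    "$flag" "$1" "$3" "$2" "$1"
}

# Example: the first wheezy check from the log, with $$ unescaped to $.
probe_cmd dns-test-service udp A
```

Run once per name and protocol, this reproduces the command strings seen in the `STEP: Running these commands` output, modulo the template escaping.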
• [SLOW TEST:45.591 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":1,"skipped":6,"failed":0}
SSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:11:14.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 4 08:11:16.601: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/ alternatives.log containers/
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:11:29.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9323" for this suite.
• [SLOW TEST:12.669 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":76,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:11:29.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul 4 08:11:29.500: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 4 08:11:29.522: INFO: Waiting for terminating namespaces to be deleted...
Jul 4 08:11:29.525: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Jul 4 08:11:29.546: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul 4 08:11:29.546: INFO: Container kube-proxy ready: true, restart count 0
Jul 4 08:11:29.546: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul 4 08:11:29.546: INFO: Container kindnet-cni ready: true, restart count 0
Jul 4 08:11:29.546: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Jul 4 08:11:29.575: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul 4 08:11:29.575: INFO: Container kube-proxy ready: true, restart count 0
Jul 4 08:11:29.575: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul 4 08:11:29.575: INFO: Container kindnet-cni ready: true, restart count 0
Jul 4 08:11:29.575: INFO: bin-falseb99ac0ab-6742-4cb8-93c1-49fb79ae1762 from kubelet-test-9323 started at 2020-07-04 08:11:17 +0000 UTC (1 container statuses recorded)
Jul 4 08:11:29.575: INFO: Container bin-falseb99ac0ab-6742-4cb8-93c1-49fb79ae1762 ready: false, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Jul 4 08:11:29.628: INFO: Pod kindnet-gnxwn requesting resource cpu=100m on Node jerma-worker
Jul 4 08:11:29.628: INFO: Pod kindnet-qg8qr requesting resource cpu=100m on Node jerma-worker2
Jul 4 08:11:29.628: INFO: Pod kube-proxy-8sp85 requesting resource cpu=0m on Node jerma-worker
Jul 4 08:11:29.628: INFO: Pod kube-proxy-b2ncl requesting resource cpu=0m on Node jerma-worker2
STEP: Starting Pods to
consume most of the cluster CPU. Jul 4 08:11:29.628: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Jul 4 08:11:29.634: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c.161e7e64d34569cf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1201/filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c.161e7e651ef929e4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c.161e7e656e4bd128], Reason = [Created], Message = [Created container filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c] STEP: Considering event: Type = [Normal], Name = [filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c.161e7e659029cb25], Reason = [Started], Message = [Started container filler-pod-74571712-cb17-42a9-a258-dfb8b7661e7c] STEP: Considering event: Type = [Normal], Name = [filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66.161e7e64d4e30f17], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1201/filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66.161e7e656f816570], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66.161e7e65dd0bb98c], Reason = [Created], Message = [Created container filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66] STEP: Considering event: Type = [Normal], Name = [filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66.161e7e65edddf061], Reason = [Started], 
Message = [Started container filler-pod-898a089e-1ae9-4b4e-a63b-ad4766b40f66] STEP: Considering event: Type = [Warning], Name = [additional-pod.161e7e663bdc8f73], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:11:36.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1201" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.292 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":4,"skipped":84,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:11:36.743: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-f60580e3-50be-4faf-99cc-0266a23b2ba9 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-f60580e3-50be-4faf-99cc-0266a23b2ba9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:13:01.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7582" for this suite. • [SLOW TEST:84.718 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":88,"failed":0} S ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:13:01.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for 
a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 4 08:13:01.586: INFO: Waiting up to 5m0s for pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542" in namespace "downward-api-460" to be "success or failure" Jul 4 08:13:01.603: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542": Phase="Pending", Reason="", readiness=false. Elapsed: 17.137328ms Jul 4 08:13:03.608: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022057229s Jul 4 08:13:05.612: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026410235s Jul 4 08:13:07.679: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093490964s Jul 4 08:13:09.683: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.097049672s STEP: Saw pod success Jul 4 08:13:09.683: INFO: Pod "downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542" satisfied condition "success or failure" Jul 4 08:13:09.685: INFO: Trying to get logs from node jerma-worker pod downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542 container dapi-container:STEP: delete the pod Jul 4 08:13:09.725: INFO: Waiting for pod downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542 to disappear Jul 4 08:13:09.746: INFO: Pod downward-api-bc36041b-4b18-4e02-9e6e-8aeee8720542 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:13:09.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-460" for this suite. • [SLOW TEST:8.394 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":89,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:13:09.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to 
be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Jul 4 08:13:10.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-651 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jul 4 08:13:17.471: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0704 08:13:17.421516 30 log.go:172] (0xc000b344d0) (0xc000b18140) Create stream\nI0704 08:13:17.421581 30 log.go:172] (0xc000b344d0) (0xc000b18140) Stream added, broadcasting: 1\nI0704 08:13:17.423739 30 log.go:172] (0xc000b344d0) Reply frame received for 1\nI0704 08:13:17.423766 30 log.go:172] (0xc000b344d0) (0xc00032f360) Create stream\nI0704 08:13:17.423774 30 log.go:172] (0xc000b344d0) (0xc00032f360) Stream added, broadcasting: 3\nI0704 08:13:17.424539 30 log.go:172] (0xc000b344d0) Reply frame received for 3\nI0704 08:13:17.424572 30 log.go:172] (0xc000b344d0) (0xc00032f400) Create stream\nI0704 08:13:17.424582 30 log.go:172] (0xc000b344d0) (0xc00032f400) Stream added, broadcasting: 5\nI0704 08:13:17.425541 30 log.go:172] (0xc000b344d0) Reply frame received for 5\nI0704 08:13:17.425568 30 log.go:172] (0xc000b344d0) (0xc000b181e0) Create stream\nI0704 08:13:17.425579 30 log.go:172] (0xc000b344d0) (0xc000b181e0) Stream added, broadcasting: 7\nI0704 08:13:17.426388 30 log.go:172] (0xc000b344d0) Reply frame received for 7\nI0704 08:13:17.426499 30 
log.go:172] (0xc00032f360) (3) Writing data frame\nI0704 08:13:17.426562 30 log.go:172] (0xc00032f360) (3) Writing data frame\nI0704 08:13:17.427266 30 log.go:172] (0xc000b344d0) Data frame received for 5\nI0704 08:13:17.427278 30 log.go:172] (0xc00032f400) (5) Data frame handling\nI0704 08:13:17.427290 30 log.go:172] (0xc00032f400) (5) Data frame sent\nI0704 08:13:17.427803 30 log.go:172] (0xc000b344d0) Data frame received for 5\nI0704 08:13:17.427817 30 log.go:172] (0xc00032f400) (5) Data frame handling\nI0704 08:13:17.427829 30 log.go:172] (0xc00032f400) (5) Data frame sent\nI0704 08:13:17.450575 30 log.go:172] (0xc000b344d0) Data frame received for 5\nI0704 08:13:17.450600 30 log.go:172] (0xc00032f400) (5) Data frame handling\nI0704 08:13:17.450635 30 log.go:172] (0xc000b344d0) Data frame received for 7\nI0704 08:13:17.450666 30 log.go:172] (0xc000b181e0) (7) Data frame handling\nI0704 08:13:17.451117 30 log.go:172] (0xc000b344d0) (0xc00032f360) Stream removed, broadcasting: 3\nI0704 08:13:17.451158 30 log.go:172] (0xc000b344d0) Data frame received for 1\nI0704 08:13:17.451174 30 log.go:172] (0xc000b18140) (1) Data frame handling\nI0704 08:13:17.451197 30 log.go:172] (0xc000b18140) (1) Data frame sent\nI0704 08:13:17.451219 30 log.go:172] (0xc000b344d0) (0xc000b18140) Stream removed, broadcasting: 1\nI0704 08:13:17.451236 30 log.go:172] (0xc000b344d0) Go away received\nI0704 08:13:17.451645 30 log.go:172] (0xc000b344d0) (0xc000b18140) Stream removed, broadcasting: 1\nI0704 08:13:17.451692 30 log.go:172] (0xc000b344d0) (0xc00032f360) Stream removed, broadcasting: 3\nI0704 08:13:17.451709 30 log.go:172] (0xc000b344d0) (0xc00032f400) Stream removed, broadcasting: 5\nI0704 08:13:17.451727 30 log.go:172] (0xc000b344d0) (0xc000b181e0) Stream removed, broadcasting: 7\n" Jul 4 08:13:17.471: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl 
client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:13:19.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-651" for this suite. • [SLOW TEST:9.628 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":7,"skipped":101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:13:19.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 4 08:13:19.608: INFO: Waiting up to 5m0s for pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3" in namespace "emptydir-3002" to be "success or failure" Jul 4 08:13:19.612: INFO: Pod 
"pod-2c1f5c99-5397-4677-86e0-298e776beda3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.547542ms Jul 4 08:13:21.616: INFO: Pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007652943s Jul 4 08:13:23.626: INFO: Pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017661272s Jul 4 08:13:25.630: INFO: Pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3": Phase="Running", Reason="", readiness=true. Elapsed: 6.021297098s Jul 4 08:13:27.638: INFO: Pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.029306454s STEP: Saw pod success Jul 4 08:13:27.638: INFO: Pod "pod-2c1f5c99-5397-4677-86e0-298e776beda3" satisfied condition "success or failure" Jul 4 08:13:27.640: INFO: Trying to get logs from node jerma-worker pod pod-2c1f5c99-5397-4677-86e0-298e776beda3 container test-container: STEP: delete the pod Jul 4 08:13:27.658: INFO: Waiting for pod pod-2c1f5c99-5397-4677-86e0-298e776beda3 to disappear Jul 4 08:13:27.662: INFO: Pod pod-2c1f5c99-5397-4677-86e0-298e776beda3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:13:27.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3002" for this suite. 
• [SLOW TEST:8.185 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:13:27.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Jul 4 08:13:27.718: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:13:27.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8290" for this suite. 
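The proxy test above runs `kubectl proxy -p 0`, which asks the OS for an ephemeral port rather than a fixed one. The underlying mechanism is binding to port 0, sketched here with the stdlib alone (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"net"
)

// ephemeralPort binds to port 0 so the kernel assigns a free port, then
// reports which port was chosen -- the behavior behind `kubectl proxy -p 0`.
func ephemeralPort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	p, err := ephemeralPort()
	fmt.Println(p, err)
}
```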
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":9,"skipped":215,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:13:27.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-da12604f-a114-4519-981e-8b576fb52e44 STEP: Creating a pod to test consume secrets Jul 4 08:13:27.904: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2" in namespace "projected-2153" to be "success or failure" Jul 4 08:13:27.921: INFO: Pod "pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.524247ms Jul 4 08:13:29.925: INFO: Pod "pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021873782s Jul 4 08:13:31.929: INFO: Pod "pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025295216s STEP: Saw pod success Jul 4 08:13:31.929: INFO: Pod "pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2" satisfied condition "success or failure" Jul 4 08:13:31.931: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2 container projected-secret-volume-test: STEP: delete the pod Jul 4 08:13:31.980: INFO: Waiting for pod pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2 to disappear Jul 4 08:13:31.998: INFO: Pod pod-projected-secrets-14d653b0-a649-44d3-8c5f-92b176390ba2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:13:31.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2153" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":226,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:13:32.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the 
pod is in kubernetes STEP: updating the pod Jul 4 08:13:43.158: INFO: Successfully updated pod "pod-update-0a8862f3-7548-4c4d-bb25-8352b9aa7a8c" STEP: verifying the updated pod is in kubernetes Jul 4 08:13:43.171: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:13:43.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2786" for this suite. • [SLOW TEST:11.173 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":231,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:13:43.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 4 08:13:43.744: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Jul 4 08:13:45.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:13:47.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jul 4 08:13:50.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:13:51.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:13:53.757: 
INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447223, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 4 08:13:56.806: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jul 4 08:13:56.831: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:13:56.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7217" for this suite. STEP: Destroying namespace "webhook-7217-markers" for this suite. 
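The repeated DeploymentStatus dumps above show the framework polling until the webhook deployment reports minimum availability. The readiness decision it waits on can be sketched like this (simplified structs standing in for the real appsv1 types; this is not the framework's actual code):

```go
package main

import "fmt"

// Simplified stand-ins for the appsv1.DeploymentStatus fields
// inspected in the log output above.
type condition struct {
	Type   string
	Status string
}

type deploymentStatus struct {
	Replicas      int32
	ReadyReplicas int32
	Conditions    []condition
}

// available reports whether the deployment has reached minimum
// availability: all replicas ready and the Available condition True.
func available(s deploymentStatus) bool {
	if s.ReadyReplicas < s.Replicas {
		return false
	}
	for _, c := range s.Conditions {
		if c.Type == "Available" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	pending := deploymentStatus{Replicas: 1, ReadyReplicas: 0,
		Conditions: []condition{{Type: "Available", Status: "False"}}}
	ready := deploymentStatus{Replicas: 1, ReadyReplicas: 1,
		Conditions: []condition{{Type: "Available", Status: "True"}}}
	fmt.Println(available(pending), available(ready))
}
```

In the log, each dump with `ReadyReplicas:0` and `Available: "False"` corresponds to the pending case; the loop exits once the status matches the ready case.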
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.750 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":12,"skipped":234,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:13:56.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jul 4 08:14:01.578: INFO: Successfully updated pod "annotationupdate80fb40e9-0e0c-451d-8103-a0fc359c10a8" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:14:05.616: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "downward-api-6855" for this suite. • [SLOW TEST:8.695 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:14:05.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 4 08:14:05.686: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 4 08:14:08.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7592 create -f -' Jul 4 08:14:13.482: INFO: stderr: "" Jul 4 08:14:13.482: INFO: stdout: "e2e-test-crd-publish-openapi-4104-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jul 4 08:14:13.482: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7592 delete e2e-test-crd-publish-openapi-4104-crds test-cr' Jul 4 08:14:13.691: INFO: stderr: "" Jul 4 08:14:13.691: INFO: stdout: "e2e-test-crd-publish-openapi-4104-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jul 4 08:14:13.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7592 apply -f -' Jul 4 08:14:14.563: INFO: stderr: "" Jul 4 08:14:14.563: INFO: stdout: "e2e-test-crd-publish-openapi-4104-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jul 4 08:14:14.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7592 delete e2e-test-crd-publish-openapi-4104-crds test-cr' Jul 4 08:14:14.770: INFO: stderr: "" Jul 4 08:14:14.770: INFO: stdout: "e2e-test-crd-publish-openapi-4104-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jul 4 08:14:14.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4104-crds' Jul 4 08:14:15.135: INFO: stderr: "" Jul 4 08:14:15.135: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4104-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:14:17.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7592" for this suite. 
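The CRD steps above repeatedly shell out to kubectl with an explicit kubeconfig and namespace (`Running '/usr/local/bin/kubectl --kubeconfig=... --namespace=... create -f -'`). As a hedged sketch only (this is not the e2e framework's actual helper, just an illustration of how those argv lines are shaped):

```python
def kubectl_cmd(verb, *args, kubeconfig="/root/.kube/config", namespace=None):
    """Assemble an argv list mirroring the log's
    'Running /usr/local/bin/kubectl --kubeconfig=... --namespace=... <verb> ...' lines.
    Nothing is executed here; the list could be passed to subprocess.run()."""
    cmd = ["/usr/local/bin/kubectl", f"--kubeconfig={kubeconfig}"]
    if namespace:
        cmd.append(f"--namespace={namespace}")
    cmd.append(verb)
    cmd.extend(args)
    return cmd

# The apply step from the log, reconstructed (stdin would carry the CR manifest):
print(" ".join(kubectl_cmd("apply", "-f", "-", namespace="crd-publish-openapi-7592")))
```

Note that the `kubectl explain` invocation in the log carries no `--namespace` flag, since `explain` reads published OpenAPI schema rather than namespaced objects.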
• [SLOW TEST:12.379 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":14,"skipped":287,"failed":0} S ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:14:18.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:14:34.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5386" for this suite. 
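The Job test above ("should run a job to completion when tasks sometimes fail and are locally restarted") exercises `restartPolicy: OnFailure` semantics: failed task runs are restarted in place until the Job reaches its completion count. A minimal local sketch of that control loop, with `task`, `completions`, and `max_restarts` as illustrative stand-ins rather than the controller's real fields:

```python
def run_job(task, completions, max_restarts=100):
    """Drive `task` (returns True on success, False on failure) until
    `completions` successes accumulate, restarting locally on each failure,
    roughly as a Job with restartPolicy: OnFailure behaves."""
    done = restarts = 0
    while done < completions:
        if task():
            done += 1
        else:
            restarts += 1
            if restarts > max_restarts:
                raise RuntimeError("backoff limit exceeded")
    return done, restarts

# A task that fails intermittently still drives the Job to completion:
attempts = iter([False, True, False, True, True])
print(run_job(lambda: next(attempts), completions=3))  # prints (3, 2)
```

The `max_restarts` guard plays the role of a Job's backoff limit: without it, a task that always fails would loop forever instead of marking the Job failed.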
• [SLOW TEST:16.163 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":15,"skipped":288,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:14:34.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 4 08:14:34.246: INFO: Waiting up to 5m0s for pod "pod-1b664330-e28a-4fdd-8240-d5c06addba6e" in namespace "emptydir-46" to be "success or failure" Jul 4 08:14:34.250: INFO: Pod "pod-1b664330-e28a-4fdd-8240-d5c06addba6e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.013909ms Jul 4 08:14:36.255: INFO: Pod "pod-1b664330-e28a-4fdd-8240-d5c06addba6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008086848s Jul 4 08:14:38.259: INFO: Pod "pod-1b664330-e28a-4fdd-8240-d5c06addba6e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012075492s STEP: Saw pod success Jul 4 08:14:38.259: INFO: Pod "pod-1b664330-e28a-4fdd-8240-d5c06addba6e" satisfied condition "success or failure" Jul 4 08:14:38.262: INFO: Trying to get logs from node jerma-worker2 pod pod-1b664330-e28a-4fdd-8240-d5c06addba6e container test-container: STEP: delete the pod Jul 4 08:14:38.276: INFO: Waiting for pod pod-1b664330-e28a-4fdd-8240-d5c06addba6e to disappear Jul 4 08:14:38.345: INFO: Pod pod-1b664330-e28a-4fdd-8240-d5c06addba6e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:14:38.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-46" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":295,"failed":0} SS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:14:38.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-8423/configmap-test-b362fe9d-10ba-4f17-a831-238c1a556af9 STEP: Creating a pod to test consume configMaps Jul 4 08:14:38.741: INFO: Waiting up to 5m0s for pod "pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78" in namespace "configmap-8423" 
to be "success or failure" Jul 4 08:14:38.749: INFO: Pod "pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78": Phase="Pending", Reason="", readiness=false. Elapsed: 7.583066ms Jul 4 08:14:40.753: INFO: Pod "pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011815825s Jul 4 08:14:42.757: INFO: Pod "pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015933496s STEP: Saw pod success Jul 4 08:14:42.757: INFO: Pod "pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78" satisfied condition "success or failure" Jul 4 08:14:42.760: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78 container env-test: STEP: delete the pod Jul 4 08:14:42.886: INFO: Waiting for pod pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78 to disappear Jul 4 08:14:42.892: INFO: Pod pod-configmaps-1fe1117d-c1b8-4cf2-943e-269697511a78 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:14:42.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8423" for this suite. 
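Several tests above log `Waiting up to 5m0s for pod "..." to be "success or failure"` followed by periodic `Phase="Pending" ... Elapsed: ...` lines. That is a poll-until-terminal-phase loop; a minimal sketch of the pattern, assuming a stubbed `get_phase` callable in place of a real API-server query:

```python
import itertools
import time

def wait_for_pod(get_phase, timeout=300.0, interval=2.0,
                 clock=time.monotonic, sleep=time.sleep):
    """Poll the pod phase every `interval` seconds until it is terminal
    (Succeeded or Failed) or `timeout` elapses, mirroring the framework's
    'Waiting up to 5m0s for pod ...' behavior."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Driving the sketch with canned phases (no cluster needed):
phases = itertools.chain(["Pending", "Pending", "Running"],
                         itertools.repeat("Succeeded"))
print(wait_for_pod(lambda: next(phases), sleep=lambda _: None))  # prints Succeeded
```

Injecting `clock` and `sleep` keeps the sketch testable without waiting real seconds, which is also why the log's elapsed times (3ms, 2s, 4s) climb in roughly `interval`-sized steps.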
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":297,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:14:42.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9705 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 4 08:14:42.955: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 4 08:15:13.128: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.8:8080/dial?request=hostname&protocol=http&host=10.244.1.7&port=8080&tries=1'] Namespace:pod-network-test-9705 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 4 08:15:13.128: INFO: >>> kubeConfig: /root/.kube/config I0704 08:15:13.165444 6 log.go:172] (0xc004d6c4d0) (0xc0016ca960) Create stream I0704 08:15:13.165486 6 log.go:172] (0xc004d6c4d0) (0xc0016ca960) Stream added, broadcasting: 1 I0704 08:15:13.167539 6 log.go:172] (0xc004d6c4d0) Reply frame received for 1 I0704 08:15:13.167585 6 log.go:172] (0xc004d6c4d0) (0xc001d8e000) Create 
stream I0704 08:15:13.167601 6 log.go:172] (0xc004d6c4d0) (0xc001d8e000) Stream added, broadcasting: 3 I0704 08:15:13.168629 6 log.go:172] (0xc004d6c4d0) Reply frame received for 3 I0704 08:15:13.168657 6 log.go:172] (0xc004d6c4d0) (0xc00199d360) Create stream I0704 08:15:13.168668 6 log.go:172] (0xc004d6c4d0) (0xc00199d360) Stream added, broadcasting: 5 I0704 08:15:13.169813 6 log.go:172] (0xc004d6c4d0) Reply frame received for 5 I0704 08:15:13.246894 6 log.go:172] (0xc004d6c4d0) Data frame received for 3 I0704 08:15:13.246924 6 log.go:172] (0xc001d8e000) (3) Data frame handling I0704 08:15:13.246943 6 log.go:172] (0xc001d8e000) (3) Data frame sent I0704 08:15:13.247822 6 log.go:172] (0xc004d6c4d0) Data frame received for 3 I0704 08:15:13.247854 6 log.go:172] (0xc001d8e000) (3) Data frame handling I0704 08:15:13.247869 6 log.go:172] (0xc004d6c4d0) Data frame received for 5 I0704 08:15:13.247877 6 log.go:172] (0xc00199d360) (5) Data frame handling I0704 08:15:13.249632 6 log.go:172] (0xc004d6c4d0) Data frame received for 1 I0704 08:15:13.249661 6 log.go:172] (0xc0016ca960) (1) Data frame handling I0704 08:15:13.249684 6 log.go:172] (0xc0016ca960) (1) Data frame sent I0704 08:15:13.249700 6 log.go:172] (0xc004d6c4d0) (0xc0016ca960) Stream removed, broadcasting: 1 I0704 08:15:13.249805 6 log.go:172] (0xc004d6c4d0) Go away received I0704 08:15:13.250191 6 log.go:172] (0xc004d6c4d0) (0xc0016ca960) Stream removed, broadcasting: 1 I0704 08:15:13.250213 6 log.go:172] (0xc004d6c4d0) (0xc001d8e000) Stream removed, broadcasting: 3 I0704 08:15:13.250226 6 log.go:172] (0xc004d6c4d0) (0xc00199d360) Stream removed, broadcasting: 5 Jul 4 08:15:13.250: INFO: Waiting for responses: map[] Jul 4 08:15:13.255: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.8:8080/dial?request=hostname&protocol=http&host=10.244.2.17&port=8080&tries=1'] Namespace:pod-network-test-9705 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Jul 4 08:15:13.255: INFO: >>> kubeConfig: /root/.kube/config I0704 08:15:13.289605 6 log.go:172] (0xc0051f06e0) (0xc001756d20) Create stream I0704 08:15:13.289659 6 log.go:172] (0xc0051f06e0) (0xc001756d20) Stream added, broadcasting: 1 I0704 08:15:13.291308 6 log.go:172] (0xc0051f06e0) Reply frame received for 1 I0704 08:15:13.291359 6 log.go:172] (0xc0051f06e0) (0xc001a09360) Create stream I0704 08:15:13.291378 6 log.go:172] (0xc0051f06e0) (0xc001a09360) Stream added, broadcasting: 3 I0704 08:15:13.292329 6 log.go:172] (0xc0051f06e0) Reply frame received for 3 I0704 08:15:13.292381 6 log.go:172] (0xc0051f06e0) (0xc00199d680) Create stream I0704 08:15:13.292398 6 log.go:172] (0xc0051f06e0) (0xc00199d680) Stream added, broadcasting: 5 I0704 08:15:13.293848 6 log.go:172] (0xc0051f06e0) Reply frame received for 5 I0704 08:15:13.376695 6 log.go:172] (0xc0051f06e0) Data frame received for 3 I0704 08:15:13.376716 6 log.go:172] (0xc001a09360) (3) Data frame handling I0704 08:15:13.376728 6 log.go:172] (0xc001a09360) (3) Data frame sent I0704 08:15:13.377291 6 log.go:172] (0xc0051f06e0) Data frame received for 3 I0704 08:15:13.377314 6 log.go:172] (0xc001a09360) (3) Data frame handling I0704 08:15:13.377772 6 log.go:172] (0xc0051f06e0) Data frame received for 5 I0704 08:15:13.377793 6 log.go:172] (0xc00199d680) (5) Data frame handling I0704 08:15:13.379018 6 log.go:172] (0xc0051f06e0) Data frame received for 1 I0704 08:15:13.379046 6 log.go:172] (0xc001756d20) (1) Data frame handling I0704 08:15:13.379053 6 log.go:172] (0xc001756d20) (1) Data frame sent I0704 08:15:13.379064 6 log.go:172] (0xc0051f06e0) (0xc001756d20) Stream removed, broadcasting: 1 I0704 08:15:13.379073 6 log.go:172] (0xc0051f06e0) Go away received I0704 08:15:13.379178 6 log.go:172] (0xc0051f06e0) (0xc001756d20) Stream removed, broadcasting: 1 I0704 08:15:13.379202 6 log.go:172] (0xc0051f06e0) (0xc001a09360) Stream removed, broadcasting: 3 I0704 
08:15:13.379216 6 log.go:172] (0xc0051f06e0) (0xc00199d680) Stream removed, broadcasting: 5 Jul 4 08:15:13.379: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:15:13.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9705" for this suite. • [SLOW TEST:30.488 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:15:13.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be 
served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jul 4 08:15:13.448: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:15:26.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7348" for this suite. • [SLOW TEST:13.292 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":19,"skipped":389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:15:26.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 4 08:15:26.759: INFO: Creating deployment "test-recreate-deployment" Jul 4 08:15:26.775: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jul 4 08:15:26.788: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jul 4 08:15:28.796: INFO: Waiting deployment "test-recreate-deployment" to complete Jul 4 08:15:28.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447326, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447326, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447326, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447326, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:15:30.802: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jul 4 08:15:30.809: INFO: Updating deployment test-recreate-deployment Jul 4 08:15:30.809: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jul 4 08:15:31.486: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-8705 /apis/apps/v1/namespaces/deployment-8705/deployments/test-recreate-deployment 8d8745f8-17a5-46c4-b3f7-0c7cd8ecf693 5503 2 2020-07-04 08:15:26 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003eb8938 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-04 08:15:31 +0000 UTC,LastTransitionTime:2020-07-04 08:15:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is 
progressing.,LastUpdateTime:2020-07-04 08:15:31 +0000 UTC,LastTransitionTime:2020-07-04 08:15:26 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jul 4 08:15:31.489: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-8705 /apis/apps/v1/namespaces/deployment-8705/replicasets/test-recreate-deployment-5f94c574ff 4eafb608-0fea-43e2-9eb1-aa5e00e0e53c 5501 1 2020-07-04 08:15:30 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 8d8745f8-17a5-46c4-b3f7-0c7cd8ecf693 0xc003eb8d57 0xc003eb8d58}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003eb8dc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 4 08:15:31.489: INFO: All old ReplicaSets of Deployment 
"test-recreate-deployment": Jul 4 08:15:31.490: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-8705 /apis/apps/v1/namespaces/deployment-8705/replicasets/test-recreate-deployment-799c574856 6a315daa-4aaf-42a5-b6b5-377fe4b8b57d 5491 2 2020-07-04 08:15:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 8d8745f8-17a5-46c4-b3f7-0c7cd8ecf693 0xc003eb8e37 0xc003eb8e38}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003eb8eb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 4 08:15:31.493: INFO: Pod "test-recreate-deployment-5f94c574ff-qnv47" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-qnv47 test-recreate-deployment-5f94c574ff- deployment-8705 /api/v1/namespaces/deployment-8705/pods/test-recreate-deployment-5f94c574ff-qnv47 
e97b7837-c805-4922-a64e-93a7660fa950 5502 0 2020-07-04 08:15:30 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 4eafb608-0fea-43e2-9eb1-aa5e00e0e53c 0xc003eb9357 0xc003eb9358}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r67fs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r67fs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r67fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,Hos
tPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 08:15:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 08:15:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 08:15:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-04 08:15:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-04 08:15:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:15:31.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8705" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":20,"skipped":423,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:15:31.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-fa2fcb38-983a-4b3b-8bf6-1c27252785d7 STEP: Creating a pod to test consume configMaps Jul 4 08:15:31.684: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f" in namespace "projected-8265" to be "success or failure" Jul 4 08:15:31.790: INFO: Pod "pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f": Phase="Pending", Reason="", readiness=false. Elapsed: 105.844061ms Jul 4 08:15:33.795: INFO: Pod "pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110298436s Jul 4 08:15:35.813: INFO: Pod "pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f": Phase="Running", Reason="", readiness=true. Elapsed: 4.128407807s Jul 4 08:15:37.921: INFO: Pod "pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.236804786s STEP: Saw pod success Jul 4 08:15:37.921: INFO: Pod "pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f" satisfied condition "success or failure" Jul 4 08:15:37.924: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f container projected-configmap-volume-test: STEP: delete the pod Jul 4 08:15:38.541: INFO: Waiting for pod pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f to disappear Jul 4 08:15:38.546: INFO: Pod pod-projected-configmaps-e587e055-1a0c-43ee-ae8f-cd729c84bd1f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:15:38.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8265" for this suite. 
• [SLOW TEST:6.989 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":442,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:15:38.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 4 08:15:38.665: INFO: Waiting up to 5m0s for pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f" in namespace "emptydir-7939" to be "success or failure"
Jul 4 08:15:38.674: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.121758ms
Jul 4 08:15:40.677: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01243045s
Jul 4 08:15:42.681: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016657379s
Jul 4 08:15:44.963: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.298474311s
Jul 4 08:15:46.967: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Running", Reason="", readiness=true. Elapsed: 8.302131149s
Jul 4 08:15:48.971: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.306335955s
STEP: Saw pod success
Jul 4 08:15:48.971: INFO: Pod "pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f" satisfied condition "success or failure"
Jul 4 08:15:48.974: INFO: Trying to get logs from node jerma-worker pod pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f container test-container:
STEP: delete the pod
Jul 4 08:15:49.139: INFO: Waiting for pod pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f to disappear
Jul 4 08:15:49.172: INFO: Pod pod-456a06b7-a404-454f-9d4f-0f81c04c9f6f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:15:49.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7939" for this suite.
• [SLOW TEST:10.626 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":453,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:15:49.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-1505753d-d1c4-44b0-955a-60d7f432faaf in namespace container-probe-88 Jul 4 08:15:53.607: INFO: Started pod liveness-1505753d-d1c4-44b0-955a-60d7f432faaf in namespace container-probe-88 STEP: checking the pod's current state and verifying that restartCount is present Jul 4 08:15:53.712: INFO: Initial restart count of pod liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is 0 Jul 4 08:16:15.771: INFO: Restart count of pod 
container-probe-88/liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is now 1 (22.058958774s elapsed) Jul 4 08:16:35.814: INFO: Restart count of pod container-probe-88/liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is now 2 (42.102365941s elapsed) Jul 4 08:16:55.856: INFO: Restart count of pod container-probe-88/liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is now 3 (1m2.144388316s elapsed) Jul 4 08:17:15.899: INFO: Restart count of pod container-probe-88/liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is now 4 (1m22.187113487s elapsed) Jul 4 08:18:26.803: INFO: Restart count of pod container-probe-88/liveness-1505753d-d1c4-44b0-955a-60d7f432faaf is now 5 (2m33.091144687s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:18:26.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-88" for this suite. • [SLOW TEST:157.655 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":462,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 
08:18:26.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 4 08:18:27.251: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-533 /api/v1/namespaces/watch-533/configmaps/e2e-watch-test-watch-closed 3fb8f063-ba6e-42eb-99dc-2c93e14522f1 6136 0 2020-07-04 08:18:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 4 08:18:27.251: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-533 /api/v1/namespaces/watch-533/configmaps/e2e-watch-test-watch-closed 3fb8f063-ba6e-42eb-99dc-2c93e14522f1 6137 0 2020-07-04 08:18:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 4 08:18:27.299: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-533 /api/v1/namespaces/watch-533/configmaps/e2e-watch-test-watch-closed 3fb8f063-ba6e-42eb-99dc-2c93e14522f1 6139 0 2020-07-04 08:18:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} 
Jul 4 08:18:27.299: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-533 /api/v1/namespaces/watch-533/configmaps/e2e-watch-test-watch-closed 3fb8f063-ba6e-42eb-99dc-2c93e14522f1 6141 0 2020-07-04 08:18:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:18:27.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-533" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":24,"skipped":475,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:18:27.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 4 08:18:27.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef" in namespace 
"projected-1025" to be "success or failure" Jul 4 08:18:27.465: INFO: Pod "downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 21.329018ms Jul 4 08:18:29.470: INFO: Pod "downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025728052s Jul 4 08:18:31.474: INFO: Pod "downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029795095s STEP: Saw pod success Jul 4 08:18:31.474: INFO: Pod "downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef" satisfied condition "success or failure" Jul 4 08:18:31.477: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef container client-container: STEP: delete the pod Jul 4 08:18:31.614: INFO: Waiting for pod downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef to disappear Jul 4 08:18:31.623: INFO: Pod downwardapi-volume-a4f1895e-6edb-4959-8df9-e9cdc7aaf3ef no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:18:31.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1025" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":478,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:18:31.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 4 08:18:31.709: INFO: Waiting up to 5m0s for pod "pod-8c50079b-cff6-47b0-8c9a-bac788186422" in namespace "emptydir-3112" to be "success or failure" Jul 4 08:18:31.779: INFO: Pod "pod-8c50079b-cff6-47b0-8c9a-bac788186422": Phase="Pending", Reason="", readiness=false. Elapsed: 70.039079ms Jul 4 08:18:33.810: INFO: Pod "pod-8c50079b-cff6-47b0-8c9a-bac788186422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100886133s Jul 4 08:18:35.814: INFO: Pod "pod-8c50079b-cff6-47b0-8c9a-bac788186422": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.105225469s STEP: Saw pod success Jul 4 08:18:35.815: INFO: Pod "pod-8c50079b-cff6-47b0-8c9a-bac788186422" satisfied condition "success or failure" Jul 4 08:18:35.818: INFO: Trying to get logs from node jerma-worker pod pod-8c50079b-cff6-47b0-8c9a-bac788186422 container test-container: STEP: delete the pod Jul 4 08:18:35.851: INFO: Waiting for pod pod-8c50079b-cff6-47b0-8c9a-bac788186422 to disappear Jul 4 08:18:35.880: INFO: Pod pod-8c50079b-cff6-47b0-8c9a-bac788186422 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:18:35.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3112" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":480,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:18:35.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jul 4 08:18:36.037: INFO: observed the pod list STEP: verifying the pod is in kubernetes 
STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jul 4 08:18:53.171: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:18:53.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6676" for this suite. • [SLOW TEST:17.238 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":500,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:18:53.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:18:53.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8111" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":28,"skipped":510,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:18:53.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-9ttt STEP: Creating a pod to test atomic-volume-subpath Jul 4 08:18:53.404: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9ttt" in namespace "subpath-775" to be "success or failure" Jul 4 08:18:53.412: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120847ms Jul 4 08:18:55.416: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01181068s Jul 4 08:18:57.418: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014424763s Jul 4 08:18:59.451: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 6.047201387s Jul 4 08:19:01.455: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 8.051435647s Jul 4 08:19:03.460: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 10.055865628s Jul 4 08:19:05.463: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 12.059341504s Jul 4 08:19:07.466: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 14.062602264s Jul 4 08:19:09.470: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 16.066378414s Jul 4 08:19:11.474: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 18.069803906s Jul 4 08:19:14.918: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 21.514574384s Jul 4 08:19:16.921: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 23.517493274s Jul 4 08:19:18.925: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 25.521005189s Jul 4 08:19:20.928: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 27.524440943s Jul 4 08:19:24.799: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 31.395487442s Jul 4 08:19:27.546: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. 
Elapsed: 34.141854672s Jul 4 08:19:29.550: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Running", Reason="", readiness=true. Elapsed: 36.145874315s Jul 4 08:19:31.553: INFO: Pod "pod-subpath-test-downwardapi-9ttt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.148915399s STEP: Saw pod success Jul 4 08:19:31.553: INFO: Pod "pod-subpath-test-downwardapi-9ttt" satisfied condition "success or failure" Jul 4 08:19:31.555: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-9ttt container test-container-subpath-downwardapi-9ttt: STEP: delete the pod Jul 4 08:19:31.782: INFO: Waiting for pod pod-subpath-test-downwardapi-9ttt to disappear Jul 4 08:19:31.834: INFO: Pod pod-subpath-test-downwardapi-9ttt no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-9ttt Jul 4 08:19:31.834: INFO: Deleting pod "pod-subpath-test-downwardapi-9ttt" in namespace "subpath-775" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:19:31.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-775" for this suite. 
• [SLOW TEST:38.894 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":29,"skipped":519,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:19:32.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 4 08:19:54.943: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:19:54.952: INFO: Pod pod-with-poststart-http-hook still exists
Jul 4 08:19:56.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:19:56.955: INFO: Pod pod-with-poststart-http-hook still exists
Jul 4 08:19:58.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:19:58.956: INFO: Pod pod-with-poststart-http-hook still exists
Jul 4 08:20:00.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:20:01.051: INFO: Pod pod-with-poststart-http-hook still exists
Jul 4 08:20:02.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:20:03.955: INFO: Pod pod-with-poststart-http-hook still exists
Jul 4 08:20:04.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:20:05.456: INFO: Pod pod-with-poststart-http-hook still exists
Jul 4 08:20:06.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:20:06.956: INFO: Pod pod-with-poststart-http-hook still exists
Jul 4 08:20:08.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:20:09.781: INFO: Pod pod-with-poststart-http-hook still exists
Jul 4 08:20:10.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:20:10.956: INFO: Pod pod-with-poststart-http-hook still exists
Jul 4 08:20:12.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:20:15.645: INFO: Pod pod-with-poststart-http-hook still exists
Jul 4 08:20:16.952: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:20:17.794: INFO: Pod pod-with-poststart-http-hook still exists
Jul 4 08:20:18.953: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 4 08:20:19.027: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:20:19.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9017" for this suite.
• [SLOW TEST:46.848 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":570,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:20:19.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7961.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7961.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 253.53.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.53.253_udp@PTR;check="$$(dig +tcp +noall +answer +search 253.53.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.53.253_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7961.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7961.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7961.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 253.53.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.53.253_udp@PTR;check="$$(dig +tcp +noall +answer +search 253.53.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.53.253_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 4 08:20:31.853: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:31.855: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:31.858: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:31.861: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:31.879: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:31.882: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:31.884: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:31.887: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:31.902: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]
Jul 4 08:20:36.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:36.908: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:36.911: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:36.914: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:36.942: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:36.944: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:36.947: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:36.949: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:36.966: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]
Jul 4 08:20:41.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:41.909: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:41.912: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:41.915: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:41.932: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:41.934: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:41.936: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:41.938: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:41.950: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]
Jul 4 08:20:46.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:46.908: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:46.910: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:46.913: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:47.257: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:47.260: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:47.263: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:47.266: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:47.280: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]
Jul 4 08:20:51.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:51.908: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:51.910: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:51.913: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:51.930: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:51.932: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:51.934: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:51.936: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:51.948: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]
Jul 4 08:20:56.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:56.943: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:56.946: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:56.954: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:57.149: INFO: Unable to read jessie_udp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:57.151: INFO: Unable to read jessie_tcp@dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:57.154: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:57.156: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local from pod dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43: the server could not find the requested resource (get pods dns-test-1a9ac933-d954-42bb-87da-efac96d22b43)
Jul 4 08:20:57.170: INFO: Lookups using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 failed for: [wheezy_udp@dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@dns-test-service.dns-7961.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_udp@dns-test-service.dns-7961.svc.cluster.local jessie_tcp@dns-test-service.dns-7961.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7961.svc.cluster.local]
Jul 4 08:21:03.188: INFO: DNS probes using dns-7961/dns-test-1a9ac933-d954-42bb-87da-efac96d22b43 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:21:10.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7961" for this suite.
• [SLOW TEST:52.263 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":31,"skipped":576,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:21:11.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 4 08:21:12.322: INFO: Waiting up to 5m0s for pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba" in namespace "security-context-test-7960" to be "success or failure"
Jul 4 08:21:12.344: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 21.744161ms
Jul 4 08:21:14.375: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052612644s
Jul 4 08:21:16.577: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255200983s
Jul 4 08:21:18.687: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365128484s
Jul 4 08:21:20.692: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.369319494s
Jul 4 08:21:22.782: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.460207292s
Jul 4 08:21:24.823: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Pending", Reason="", readiness=false. Elapsed: 12.50114421s
Jul 4 08:21:27.162: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Running", Reason="", readiness=true. Elapsed: 14.839755481s
Jul 4 08:21:29.165: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.842939255s
Jul 4 08:21:29.165: INFO: Pod "busybox-user-65534-406b70a5-5972-4bf2-b1a7-6cb0f44160ba" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:21:29.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7960" for this suite.
• [SLOW TEST:17.857 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":585,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:21:29.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 4 08:21:31.680: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 4 08:21:33.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 4 08:21:37.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 4 08:21:39.879: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 4 08:21:42.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 4 08:21:44.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 4 08:21:46.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 4 08:21:47.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 4 08:21:50.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 4 08:21:51.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 4 08:21:53.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 4 08:21:57.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1,
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:21:58.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:22:00.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:22:01.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447691, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 4 08:22:04.897: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook 
should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:22:05.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-885" for this suite. STEP: Destroying namespace "webhook-885-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:36.137 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":33,"skipped":616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:22:05.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Jul 4 08:22:05.376: INFO: Waiting up to 5m0s for pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910" in namespace "containers-8710" to be "success or failure" Jul 4 08:22:05.380: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 4.419711ms Jul 4 08:22:08.381: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 3.005254486s Jul 4 08:22:10.531: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 5.155417609s Jul 4 08:22:12.572: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 7.196636758s Jul 4 08:22:15.628: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 10.251878109s Jul 4 08:22:17.631: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 12.255031471s Jul 4 08:22:19.634: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.257801759s Jul 4 08:22:22.196: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 16.820420033s Jul 4 08:22:24.741: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 19.365657317s Jul 4 08:22:27.700: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 22.324040619s Jul 4 08:22:29.702: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Pending", Reason="", readiness=false. Elapsed: 24.326629361s Jul 4 08:22:31.706: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 26.330010812s Jul 4 08:22:34.634: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 29.258282746s Jul 4 08:22:36.638: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 31.261707111s Jul 4 08:22:38.813: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 33.437644523s Jul 4 08:22:40.816: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 35.440236865s Jul 4 08:22:42.819: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 37.443133099s Jul 4 08:22:44.823: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 39.446777767s Jul 4 08:22:47.000: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. 
Elapsed: 41.623741327s Jul 4 08:22:50.245: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 44.869619614s Jul 4 08:22:52.248: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 46.872379581s Jul 4 08:22:54.502: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 49.126033232s Jul 4 08:22:59.694: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Running", Reason="", readiness=true. Elapsed: 54.317950318s Jul 4 08:23:01.696: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 56.320639588s STEP: Saw pod success Jul 4 08:23:01.696: INFO: Pod "client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910" satisfied condition "success or failure" Jul 4 08:23:01.698: INFO: Trying to get logs from node jerma-worker pod client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910 container test-container: STEP: delete the pod Jul 4 08:23:02.371: INFO: Waiting for pod client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910 to disappear Jul 4 08:23:02.628: INFO: Pod client-containers-0b734d03-0381-41f7-9f8b-7c08e3957910 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:23:02.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8710" for this suite. 
• [SLOW TEST:57.717 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":669,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:23:03.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-b843fc32-e8fe-4b7f-83c8-aa010140181d STEP: Creating a pod to test consume configMaps Jul 4 08:23:03.221: INFO: Waiting up to 5m0s for pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf" in namespace "configmap-8300" to be "success or failure" Jul 4 08:23:03.238: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.417507ms Jul 4 08:23:06.196: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.974989637s Jul 4 08:23:08.352: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.130977561s Jul 4 08:23:10.418: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.197065338s Jul 4 08:23:12.422: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.200578716s Jul 4 08:23:14.437: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.21561948s Jul 4 08:23:16.440: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.218976227s Jul 4 08:23:19.508: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.286917154s Jul 4 08:23:21.544: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.322585748s Jul 4 08:23:24.038: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.816885856s Jul 4 08:23:26.041: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.819537722s Jul 4 08:23:28.044: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 24.822977224s Jul 4 08:23:33.578: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 30.356649883s Jul 4 08:23:35.610: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 32.388713268s Jul 4 08:23:37.724: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.503001809s Jul 4 08:23:39.727: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Pending", Reason="", readiness=false. Elapsed: 36.506056172s Jul 4 08:23:41.731: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Running", Reason="", readiness=true. Elapsed: 38.50963938s Jul 4 08:23:44.264: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.042379108s STEP: Saw pod success Jul 4 08:23:44.264: INFO: Pod "pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf" satisfied condition "success or failure" Jul 4 08:23:44.432: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf container configmap-volume-test: STEP: delete the pod Jul 4 08:23:44.928: INFO: Waiting for pod pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf to disappear Jul 4 08:23:45.305: INFO: Pod pod-configmaps-e40e8ee6-07dc-4a6f-a1c6-37df050525bf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:23:45.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8300" for this suite. 
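Both webhook specs in this run spend their setup time in "Wait for the deployment to be ready", repeatedly dumping a DeploymentStatus whose Available condition is False with reason MinimumReplicasUnavailable. The readiness decision behind that polling can be sketched as follows; note these are local stand-in types modeled on the fields printed in the log, not the actual client-go or framework types:

```go
package main

import "fmt"

// DeploymentCondition is a stand-in for the conditions dumped above.
type DeploymentCondition struct {
	Type   string
	Status string
	Reason string
}

// DeploymentStatus is a stand-in carrying the replica counters from the log.
type DeploymentStatus struct {
	Replicas            int32
	UpdatedReplicas     int32
	ReadyReplicas       int32
	AvailableReplicas   int32
	UnavailableReplicas int32
	Conditions          []DeploymentCondition
}

// deploymentReady approximates the check repeated while waiting: all desired
// replicas must be updated and available, none unavailable, and no
// Available condition may still report a non-True status.
func deploymentReady(desired int32, s DeploymentStatus) bool {
	if s.UpdatedReplicas < desired || s.AvailableReplicas < desired || s.UnavailableReplicas > 0 {
		return false
	}
	for _, c := range s.Conditions {
		if c.Type == "Available" && c.Status != "True" {
			return false
		}
	}
	return true
}

func main() {
	// The status logged repeatedly above: 1 replica updated, 0 available.
	pending := DeploymentStatus{
		Replicas: 1, UpdatedReplicas: 1, UnavailableReplicas: 1,
		Conditions: []DeploymentCondition{
			{Type: "Available", Status: "False", Reason: "MinimumReplicasUnavailable"},
		},
	}
	fmt.Println(deploymentReady(1, pending)) // false: webhook pod not yet ready

	ready := DeploymentStatus{
		Replicas: 1, UpdatedReplicas: 1, ReadyReplicas: 1, AvailableReplicas: 1,
		Conditions: []DeploymentCondition{{Type: "Available", Status: "True"}},
	}
	fmt.Println(deploymentReady(1, ready)) // true
}
```

Once this check passes, the spec proceeds to "Deploying the webhook service" and verifying that the service has paired with its endpoint.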
• [SLOW TEST:42.283 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":676,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:23:45.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 4 08:23:47.834: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 4 08:23:49.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:23:52.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:23:54.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:23:56.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:23:57.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:24:00.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:24:02.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 4 08:24:50.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447828, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729447827, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 4 08:24:55.295: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted 
namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:25:09.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9728" for this suite. STEP: Destroying namespace "webhook-9728-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:87.918 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":36,"skipped":685,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:25:13.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label 
A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jul 4 08:25:13.398: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7510 0 2020-07-04 08:25:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 4 08:25:13.398: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7510 0 2020-07-04 08:25:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jul 4 08:25:23.404: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7560 0 2020-07-04 08:25:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 4 08:25:23.404: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7560 0 2020-07-04 08:25:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jul 4 08:25:33.409: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6715 
/api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7590 0 2020-07-04 08:25:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 4 08:25:33.409: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7590 0 2020-07-04 08:25:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jul 4 08:25:43.739: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7620 0 2020-07-04 08:25:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 4 08:25:43.740: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-a 5f27f2d4-c6d7-41f4-9dbb-45c398124264 7620 0 2020-07-04 08:25:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jul 4 08:25:53.744: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-b 96aaacf1-88e9-4b5d-91cd-a32be58a2b9a 7645 0 2020-07-04 08:25:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 4 08:25:53.744: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-b 96aaacf1-88e9-4b5d-91cd-a32be58a2b9a 7645 0 2020-07-04 08:25:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jul 4 08:26:05.044: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-b 96aaacf1-88e9-4b5d-91cd-a32be58a2b9a 7671 0 2020-07-04 08:25:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 4 08:26:05.044: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6715 /api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-configmap-b 96aaacf1-88e9-4b5d-91cd-a32be58a2b9a 7671 0 2020-07-04 08:25:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:26:15.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6715" for this suite. 
• [SLOW TEST:62.304 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":37,"skipped":695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:26:15.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 4 08:26:16.556: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247" in namespace "security-context-test-2830" to be "success or failure" Jul 4 08:26:17.370: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. 
Elapsed: 813.956195ms Jul 4 08:26:19.553: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 2.997317846s Jul 4 08:26:21.660: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 5.104798395s Jul 4 08:26:23.664: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 7.108150193s Jul 4 08:26:26.724: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 10.168442492s Jul 4 08:26:28.780: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 12.224004705s Jul 4 08:26:30.783: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 14.227027409s Jul 4 08:26:33.232: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 16.67667268s Jul 4 08:26:35.235: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 18.679743817s Jul 4 08:26:37.238: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 20.68230915s Jul 4 08:26:40.014: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 23.45888356s Jul 4 08:26:42.020: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 25.464331458s Jul 4 08:26:44.296: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.740784439s Jul 4 08:26:46.300: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 29.74400619s Jul 4 08:26:48.355: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Pending", Reason="", readiness=false. Elapsed: 31.799809623s Jul 4 08:26:50.358: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Running", Reason="", readiness=true. Elapsed: 33.802492652s Jul 4 08:26:52.379: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Running", Reason="", readiness=true. Elapsed: 35.823661787s Jul 4 08:26:55.492: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.936387848s Jul 4 08:26:55.492: INFO: Pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247" satisfied condition "success or failure" Jul 4 08:26:56.164: INFO: Got logs for pod "busybox-privileged-false-994c7625-ec16-424d-a99f-ab3a38c4e247": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:26:56.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2830" for this suite. 
• [SLOW TEST:40.732 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":726,"failed":0}
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:26:56.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-a2ab9ff6-076f-4080-8458-d50f09b6af4c
STEP: Creating a pod to test consume configMaps
Jul 4 08:26:56.397: INFO: Waiting up to 5m0s for pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749" in namespace "configmap-929" to be "success or failure"
Jul 4 08:26:56.404: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 7.117907ms
Jul 4 08:26:59.067: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 2.670242948s
Jul 4 08:27:01.070: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 4.673018882s
Jul 4 08:27:03.202: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 6.805282085s
Jul 4 08:27:05.260: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 8.863229342s
Jul 4 08:27:07.602: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Pending", Reason="", readiness=false. Elapsed: 11.205251135s
Jul 4 08:27:09.745: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Running", Reason="", readiness=true. Elapsed: 13.347960007s
Jul 4 08:27:11.801: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Running", Reason="", readiness=true. Elapsed: 15.404690069s
Jul 4 08:27:14.196: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Running", Reason="", readiness=true. Elapsed: 17.799488292s
Jul 4 08:27:16.260: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.863682508s
STEP: Saw pod success
Jul 4 08:27:16.260: INFO: Pod "pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749" satisfied condition "success or failure"
Jul 4 08:27:16.262: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749 container configmap-volume-test:
STEP: delete the pod
Jul 4 08:27:16.317: INFO: Waiting for pod pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749 to disappear
Jul 4 08:27:17.014: INFO: Pod pod-configmaps-c42b4a1a-e67c-485b-b618-323f4a261749 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:27:17.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-929" for this suite.
• [SLOW TEST:20.752 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":726,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:27:17.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-f503f394-4652-4318-b3f8-0bbb6f871b35
STEP: Creating a pod to test consume configMaps
Jul 4 08:27:18.216: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d" in namespace "projected-4408" to be "success or failure"
Jul 4 08:27:18.496: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 280.354568ms
Jul 4 08:27:20.499: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.283034269s
Jul 4 08:27:22.584: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368008926s
Jul 4 08:27:25.240: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.024647193s
Jul 4 08:27:27.328: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.112010757s
Jul 4 08:27:29.427: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.211613413s
Jul 4 08:27:31.896: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.680109905s
Jul 4 08:27:33.907: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.691268193s
Jul 4 08:27:36.158: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.942074662s
Jul 4 08:27:38.602: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.386609177s
Jul 4 08:27:40.724: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.50790591s
Jul 4 08:27:42.727: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.511453113s
Jul 4 08:27:44.731: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.515071893s
Jul 4 08:27:46.735: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.519078346s
Jul 4 08:27:50.148: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.932202993s
Jul 4 08:27:52.151: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 33.93527763s
Jul 4 08:27:55.450: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 37.234564136s
Jul 4 08:27:57.453: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 39.237563776s
Jul 4 08:27:59.590: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 41.374790061s
Jul 4 08:28:01.594: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 43.377963375s
Jul 4 08:28:03.597: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 45.38136824s
Jul 4 08:28:05.600: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.384326573s
Jul 4 08:28:08.057: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 49.841776915s
Jul 4 08:28:10.830: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 52.61480792s
Jul 4 08:28:15.064: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 56.848379591s
Jul 4 08:28:17.067: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 58.851676401s
Jul 4 08:28:19.196: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.980398309s
Jul 4 08:28:21.396: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.180135513s
Jul 4 08:28:23.399: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.183628752s
Jul 4 08:28:25.403: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.187097163s
Jul 4 08:28:28.239: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.023083128s
Jul 4 08:28:30.406: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.190218982s
Jul 4 08:28:32.409: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.193335392s
Jul 4 08:28:34.412: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.196490651s
Jul 4 08:28:36.415: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.199282025s
Jul 4 08:28:39.106: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.890059323s
Jul 4 08:28:41.503: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.28691103s
Jul 4 08:28:44.220: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.004466295s
Jul 4 08:28:46.282: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.065926971s
Jul 4 08:28:48.437: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.221507712s
Jul 4 08:28:50.508: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.292043325s
Jul 4 08:28:53.156: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Running", Reason="", readiness=true. Elapsed: 1m34.939928941s
Jul 4 08:28:55.158: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Running", Reason="", readiness=true. Elapsed: 1m36.942744676s
Jul 4 08:28:57.252: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Running", Reason="", readiness=true. Elapsed: 1m39.036401261s
Jul 4 08:28:59.255: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m41.039564341s
STEP: Saw pod success
Jul 4 08:28:59.255: INFO: Pod "pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d" satisfied condition "success or failure"
Jul 4 08:28:59.258: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d container projected-configmap-volume-test:
STEP: delete the pod
Jul 4 08:29:00.589: INFO: Waiting for pod pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d to disappear
Jul 4 08:29:00.867: INFO: Pod pod-projected-configmaps-96e96453-7dcc-4808-b068-3ae3e709ef7d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:29:00.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4408" for this suite.
• [SLOW TEST:104.137 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":747,"failed":0}
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:29:01.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:30:25.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6063" for this suite.
STEP: Destroying namespace "nsdeletetest-6936" for this suite.
Jul 4 08:30:26.593: INFO: Namespace nsdeletetest-6936 was already deleted
STEP: Destroying namespace "nsdeletetest-3735" for this suite.
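The namespace test above relies on Kubernetes garbage collection: deleting a namespace cascades to every object inside it, so recreating a namespace of the same name yields an empty one. A sketch of the objects involved (names are illustrative; the test generates names like nsdeletetest-6936):

```yaml
# Illustrative only. Deleting the namespace below also deletes the pod in it:
#   kubectl delete namespace nsdeletetest
# After recreation, "kubectl get pods -n nsdeletetest" returns no resources.
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest            # hypothetical name
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod                # hypothetical name
  namespace: nsdeletetest
spec:
  containers:
  - name: nginx
    image: nginx
```

Namespace deletion is asynchronous (the namespace sits in a Terminating phase until its contents are finalized), which is why the test explicitly waits for the namespace to be removed before recreating it.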
• [SLOW TEST:86.759 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":41,"skipped":749,"failed":0}
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:30:27.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:30:50.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4878" for this suite.
STEP: Destroying namespace "nsdeletetest-3385" for this suite.
Jul 4 08:30:50.432: INFO: Namespace nsdeletetest-3385 was already deleted
STEP: Destroying namespace "nsdeletetest-7685" for this suite.
• [SLOW TEST:22.520 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":42,"skipped":751,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:30:50.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:31:08.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9399" for this suite.
• [SLOW TEST:18.048 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":43,"skipped":775,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:31:08.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-9383b7d9-9e78-47a9-9490-9b0bf8760bb9 in namespace container-probe-6294
Jul 4 08:31:54.641: INFO: Started pod test-webserver-9383b7d9-9e78-47a9-9490-9b0bf8760bb9 in namespace container-probe-6294
STEP: checking the pod's current state and verifying that restartCount is present
Jul 4 08:31:54.643: INFO: Initial restart count of pod test-webserver-9383b7d9-9e78-47a9-9490-9b0bf8760bb9 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:35:56.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6294" for this suite.
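The probe test above creates a web server whose `/healthz` endpoint always succeeds, then observes it for several minutes and asserts the restart count stays at 0. A minimal sketch of such a pod (the image and probe timings here are assumptions, not the exact values the framework uses):

```yaml
# Illustrative manifest for a liveness probe that keeps passing.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver            # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/test-webserver   # assumed: any image serving 200 OK on /healthz
    livenessProbe:
      httpGet:
        path: /healthz            # probed periodically by the kubelet
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3         # a healthy endpoint never trips this, so no restart
```

As long as the probe keeps returning success, `status.containerStatuses[0].restartCount` remains 0, which is exactly what the test verifies.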
• [SLOW TEST:289.351 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":789,"failed":0}
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:35:57.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jul 4 08:36:00.804: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:36:49.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7571" for this suite.
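The init-container test above depends on two guarantees: init containers run to completion before any app container starts, and with `restartPolicy: Never` a failed init container fails the whole pod instead of being retried. A sketch of the shape of pod this exercises (names and commands are illustrative):

```yaml
# Illustrative manifest: a failing init container on a RestartNever pod.
apiVersion: v1
kind: Pod
metadata:
  name: init-fail                 # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["false"]            # exits non-zero; app containers below never start
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]    # should never run
```

The pod ends up with phase Failed and a status like `Init:Error`, and the `app` container is never started, which is what the test asserts.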
• [SLOW TEST:51.285 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":45,"skipped":789,"failed":0}
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:36:49.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:37:01.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-55" for this suite.
• [SLOW TEST:12.421 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":46,"skipped":789,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:37:01.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-d6f88050-3aff-4430-9294-2c41f9a89544 STEP: Creating a pod to test consume secrets Jul 4 08:37:03.295: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56" in namespace "projected-6803" to be "success or failure" Jul 4 08:37:03.304: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.449826ms Jul 4 08:37:05.779: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.483576359s
Jul 4 08:37:07.812: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.516941545s
Jul 4 08:37:09.829: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.533808403s
Jul 4 08:37:11.834: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538337777s
Jul 4 08:37:13.837: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 10.541919415s
Jul 4 08:37:15.885: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 12.590044518s
Jul 4 08:37:17.889: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 14.593587131s
Jul 4 08:37:20.248: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 16.952410761s
Jul 4 08:37:22.251: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 18.955479667s
Jul 4 08:37:24.254: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 20.958942625s
Jul 4 08:37:27.264: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 23.968784869s
Jul 4 08:37:29.267: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 25.971875842s
Jul 4 08:37:31.412: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 28.116815155s
Jul 4 08:37:33.416: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 30.121078888s
Jul 4 08:37:35.420: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 32.124219499s
Jul 4 08:37:37.471: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 34.176106072s
Jul 4 08:37:39.914: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 36.619086573s
Jul 4 08:37:41.917: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 38.622015455s
Jul 4 08:37:43.921: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 40.625565971s
Jul 4 08:37:45.988: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 42.693093681s
Jul 4 08:37:48.527: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 45.231588176s
Jul 4 08:37:50.529: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 47.233874718s
Jul 4 08:37:52.610: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 49.314621339s
Jul 4 08:37:55.193: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 51.89800892s
Jul 4 08:37:58.150: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 54.854452693s
Jul 4 08:38:00.763: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 57.467541366s
Jul 4 08:38:02.766: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 59.470684964s
Jul 4 08:38:04.770: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.474348843s
Jul 4 08:38:07.174: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.878519983s
Jul 4 08:38:09.176: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.880882065s
Jul 4 08:38:11.272: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.976183391s
Jul 4 08:38:13.366: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.071065669s
Jul 4 08:38:15.370: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.074636732s
Jul 4 08:38:17.492: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.196361153s
Jul 4 08:38:19.515: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.22013626s
Jul 4 08:38:21.932: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.636324133s
Jul 4 08:38:23.935: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.640096994s
Jul 4 08:38:25.947: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.651228379s
Jul 4 08:38:28.827: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.531792504s
Jul 4 08:38:30.830: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Running", Reason="", readiness=true. Elapsed: 1m27.53437261s
Jul 4 08:38:32.833: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m29.538150272s
STEP: Saw pod success
Jul 4 08:38:32.834: INFO: Pod "pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56" satisfied condition "success or failure"
Jul 4 08:38:32.836: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56 container projected-secret-volume-test:
STEP: delete the pod
Jul 4 08:38:32.879: INFO: Waiting for pod pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56 to disappear
Jul 4 08:38:32.910: INFO: Pod pod-projected-secrets-6d0dd84d-d96b-4403-a518-9564fa8bcd56 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:38:32.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6803" for this suite.
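The test that just passed consumes a secret through a projected volume. For readers reproducing this outside the e2e framework, a minimal sketch of the equivalent objects follows; every name, the secret payload, and the busybox image are illustrative assumptions, not values taken from the test run above.

```yaml
# Hypothetical reproduction of the "Projected secret" scenario (names are made up).
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-example     # assumed name, not from the log
data:
  data-1: dmFsdWUtMQ==               # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example  # assumed name, not from the log
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                   # stand-in image; the e2e suite uses its own test images
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-example
```

The pod runs to completion and its container log contains the secret value, which is the same "success or failure" pattern the framework polls for above.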
• [SLOW TEST:91.389 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":831,"failed":0}
S
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:38:32.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul 4 08:38:33.000: INFO: Waiting up to 5m0s for pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8" in namespace "downward-api-338" to be "success or failure"
Jul 4 08:38:33.004: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195701ms
Jul 4 08:38:35.097: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097346239s
Jul 4 08:38:37.107: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106615252s
Jul 4 08:38:39.154: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153938176s
Jul 4 08:38:41.578: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Running", Reason="", readiness=true. Elapsed: 8.577683483s
Jul 4 08:38:43.581: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Running", Reason="", readiness=true. Elapsed: 10.580909897s
Jul 4 08:38:45.584: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Running", Reason="", readiness=true. Elapsed: 12.583621053s
Jul 4 08:38:47.592: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.591757001s
STEP: Saw pod success
Jul 4 08:38:47.592: INFO: Pod "downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8" satisfied condition "success or failure"
Jul 4 08:38:47.654: INFO: Trying to get logs from node jerma-worker pod downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8 container dapi-container:
STEP: delete the pod
Jul 4 08:38:49.386: INFO: Waiting for pod downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8 to disappear
Jul 4 08:38:49.467: INFO: Pod downward-api-e9f71b02-1ffd-4d89-9dae-30054314c6c8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:38:49.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-338" for this suite.
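The Downward API test above verifies that when a container declares no resource limits, `limits.cpu` and `limits.memory` exposed via the downward API fall back to the node's allocatable capacity. A minimal sketch of a pod exercising the same mechanism follows; the pod name, env var names, and image are illustrative assumptions, not taken from the log.

```yaml
# Hypothetical pod using resourceFieldRef without declaring limits (names are made up).
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # assumed name, not from the log
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox              # stand-in image; the e2e suite uses its own test images
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    # No resources.limits set, so these resolve to node allocatable values.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```

The test asserts that the printed values match the node's allocatable CPU and memory rather than being empty or zero.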
• [SLOW TEST:16.541 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":832,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:38:49.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357
STEP: creating a pod
Jul 4 08:38:50.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6187 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jul 4 08:38:54.161: INFO: stderr: ""
Jul 4 08:38:54.161: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Jul 4 08:38:54.161: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jul 4 08:38:54.161: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6187" to be "running and ready, or succeeded"
Jul 4 08:38:54.168: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101819ms
Jul 4 08:38:57.608: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.446821227s
Jul 4 08:38:59.611: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.44986102s
Jul 4 08:39:02.076: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.914824611s
Jul 4 08:39:04.080: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 9.91887938s
Jul 4 08:39:06.253: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.091515367s
Jul 4 08:39:09.273: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 15.111373829s
Jul 4 08:39:13.004: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 18.84244453s
Jul 4 08:39:16.044: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 21.882332599s
Jul 4 08:39:18.935: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 24.773432606s
Jul 4 08:39:21.535: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 27.373166029s
Jul 4 08:39:23.655: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 29.493879235s
Jul 4 08:39:25.659: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 31.497373804s
Jul 4 08:39:28.343: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 34.181803418s
Jul 4 08:39:31.399: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 37.23755318s
Jul 4 08:39:33.606: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 39.444922255s
Jul 4 08:39:35.708: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 41.546593035s
Jul 4 08:39:38.601: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 44.439113428s
Jul 4 08:39:40.603: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 46.441741016s
Jul 4 08:39:42.644: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 48.482163513s
Jul 4 08:39:45.194: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 51.032376776s
Jul 4 08:39:47.665: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 53.503745354s
Jul 4 08:39:50.896: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 56.735018541s
Jul 4 08:39:52.900: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 58.738170258s
Jul 4 08:39:55.000: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.83849196s
Jul 4 08:39:58.192: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.030436921s
Jul 4 08:40:00.195: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.033989016s
Jul 4 08:40:02.199: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.037271688s
Jul 4 08:40:04.203: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.04108096s
Jul 4 08:40:06.551: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.389398494s
Jul 4 08:40:09.526: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.364650676s
Jul 4 08:40:11.530: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.368967423s
Jul 4 08:40:13.655: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.493610387s
Jul 4 08:40:15.658: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 1m21.496281896s
Jul 4 08:40:15.658: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jul 4 08:40:15.658: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jul 4 08:40:15.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187'
Jul 4 08:40:15.747: INFO: stderr: ""
Jul 4 08:40:15.747: INFO: stdout: "I0704 08:40:14.993936 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/45l 492\nI0704 08:40:15.194046 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/jwj2 332\nI0704 08:40:15.394098 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/ksb 283\nI0704 08:40:15.594087 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/xc9l 249\n"
Jul 4 08:40:17.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187'
Jul 4 08:40:17.853: INFO: stderr: ""
Jul 4 08:40:17.853: INFO: stdout: "I0704 08:40:14.993936 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/45l 492\nI0704 08:40:15.194046 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/jwj2 332\nI0704 08:40:15.394098 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/ksb 283\nI0704 08:40:15.594087 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/xc9l 249\nI0704 08:40:15.794075 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/rrkd 552\nI0704 08:40:15.994105 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/6vjb 203\nI0704 08:40:16.194080 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/p9z9 290\nI0704 08:40:16.394078 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/29qn 585\nI0704 08:40:16.594097 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/dbqs 288\nI0704 08:40:16.794092 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/btj 387\nI0704 08:40:16.994063 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/frs 479\nI0704 08:40:17.194105 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/xn2k 493\nI0704 08:40:17.394082 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/4df 446\nI0704 08:40:17.594097 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/h8x 294\nI0704 08:40:17.794070 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/9cq 350\n"
STEP: limiting log lines
Jul 4 08:40:17.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187 --tail=1'
Jul 4 08:40:17.947: INFO: stderr: ""
Jul 4 08:40:17.947: INFO: stdout: "I0704 08:40:17.794070 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/9cq 350\n"
Jul 4 08:40:17.947: INFO: got output "I0704 08:40:17.794070 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/9cq 350\n"
STEP: limiting log bytes
Jul 4 08:40:17.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187 --limit-bytes=1'
Jul 4 08:40:18.031: INFO: stderr: ""
Jul 4 08:40:18.031: INFO: stdout: "I"
Jul 4 08:40:18.031: INFO: got output "I"
STEP: exposing timestamps
Jul 4 08:40:18.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187 --tail=1 --timestamps'
Jul 4 08:40:18.127: INFO: stderr: ""
Jul 4 08:40:18.127: INFO: stdout: "2020-07-04T08:40:17.994184721Z I0704 08:40:17.994067 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/696b 474\n"
Jul 4 08:40:18.127: INFO: got output
"2020-07-04T08:40:17.994184721Z I0704 08:40:17.994067 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/696b 474\n"
STEP: restricting to a time range
Jul 4 08:40:20.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187 --since=1s'
Jul 4 08:40:20.739: INFO: stderr: ""
Jul 4 08:40:20.739: INFO: stdout: "I0704 08:40:19.794084 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/lzg 248\nI0704 08:40:19.994094 1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/87g 318\nI0704 08:40:20.194121 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/7c7q 287\nI0704 08:40:20.394194 1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/lpp 388\nI0704 08:40:20.594068 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/8t6p 247\n"
Jul 4 08:40:20.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6187 --since=24h'
Jul 4 08:40:20.833: INFO: stderr: ""
Jul 4 08:40:20.833: INFO: stdout: "I0704 08:40:14.993936 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/45l 492\nI0704 08:40:15.194046 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/jwj2 332\nI0704 08:40:15.394098 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/ksb 283\nI0704 08:40:15.594087 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/xc9l 249\nI0704 08:40:15.794075 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/rrkd 552\nI0704 08:40:15.994105 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/6vjb 203\nI0704 08:40:16.194080 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/p9z9 290\nI0704 08:40:16.394078 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/29qn 585\nI0704 08:40:16.594097 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/dbqs 288\nI0704 08:40:16.794092 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/btj 387\nI0704 08:40:16.994063 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/frs 479\nI0704 08:40:17.194105 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/xn2k 493\nI0704 08:40:17.394082 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/4df 446\nI0704 08:40:17.594097 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/h8x 294\nI0704 08:40:17.794070 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/9cq 350\nI0704 08:40:17.994067 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/696b 474\nI0704 08:40:18.194102 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/rhw 587\nI0704 08:40:18.394103 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/sgf 289\nI0704 08:40:18.594093 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/dkw 221\nI0704 08:40:18.794073 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/wwg 444\nI0704 08:40:18.994098 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/7jmk 201\nI0704 08:40:19.194106 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/q9w 375\nI0704 08:40:19.394067 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/89jc 572\nI0704 08:40:19.594063 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/4ql 476\nI0704 08:40:19.794084 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/lzg 248\nI0704 08:40:19.994094 1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/87g 318\nI0704 08:40:20.194121 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/7c7q 287\nI0704 08:40:20.394194 1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/lpp 388\nI0704 08:40:20.594068 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/8t6p 247\nI0704 08:40:20.794085 1 logs_generator.go:76] 29 POST /api/v1/namespaces/default/pods/9vs 333\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
Jul 4 08:40:20.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6187'
Jul 4 08:41:18.777: INFO: stderr: ""
Jul 4 08:41:18.777: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:41:18.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6187" for this suite.
• [SLOW TEST:151.605 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":49,"skipped":853,"failed":0}
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:41:21.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 4 08:41:24.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5" in namespace "projected-3009" to be "success or failure"
Jul 4 08:41:24.726: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 81.968555ms
Jul 4 08:41:28.076: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.431817445s
Jul 4 08:41:30.187: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.543333023s
Jul 4 08:41:32.256: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.612427402s
Jul 4 08:41:35.498: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.854619011s
Jul 4 08:41:37.500: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.856666571s
Jul 4 08:41:39.504: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.860074882s
Jul 4 08:41:42.742: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.098634932s
Jul 4 08:41:44.746: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.101784579s
Jul 4 08:41:46.749: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.10547057s
Jul 4 08:41:49.446: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.802679392s
Jul 4 08:41:51.728: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 27.084747748s
Jul 4 08:41:53.764: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 29.12029745s
Jul 4 08:41:55.767: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 31.123463917s
Jul 4 08:41:57.789: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 33.145115481s
Jul 4 08:41:59.793: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 35.149488226s
Jul 4 08:42:02.476: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Running", Reason="", readiness=true. Elapsed: 37.832536033s
Jul 4 08:42:05.550: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.906705318s
STEP: Saw pod success
Jul 4 08:42:05.551: INFO: Pod "downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5" satisfied condition "success or failure"
Jul 4 08:42:05.554: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5 container client-container:
STEP: delete the pod
Jul 4 08:42:06.698: INFO: Waiting for pod downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5 to disappear
Jul 4 08:42:06.770: INFO: Pod downwardapi-volume-40beab96-e90b-4957-a97a-a1250c9942b5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:42:06.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3009" for this suite.
• [SLOW TEST:45.696 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":853,"failed":0}
SSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:42:06.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul 4 08:42:08.513: INFO: Waiting up to 5m0s for pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc" in namespace "downward-api-2143" to be "success or failure"
Jul 4 08:42:08.515: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118556ms
Jul 4 08:42:10.518: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00460024s
Jul 4 08:42:13.057: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.544002657s
Jul 4 08:42:15.148: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.635082792s
Jul 4 08:42:17.694: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.180575541s
Jul 4 08:42:20.370: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.857049569s
Jul 4 08:42:22.532: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.01876291s
Jul 4 08:42:24.806: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.292846978s
Jul 4 08:42:26.809: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.296189418s
Jul 4 08:42:28.927: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.413564131s
Jul 4 08:42:31.139: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.626273648s
Jul 4 08:42:33.143: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.629546753s
Jul 4 08:42:35.146: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.632733403s
Jul 4 08:42:37.241: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.72790257s
Jul 4 08:42:39.376: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.863187422s
Jul 4 08:42:41.380: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.866653544s
Jul 4 08:42:43.610: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 35.096592456s
Jul 4 08:42:45.613: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 37.099815463s
STEP: Saw pod success
Jul 4 08:42:45.613: INFO: Pod "downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc" satisfied condition "success or failure"
Jul 4 08:42:45.615: INFO: Trying to get logs from node jerma-worker2 pod downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc container dapi-container:
STEP: delete the pod
Jul 4 08:42:45.643: INFO: Waiting for pod downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc to disappear
Jul 4 08:42:45.666: INFO: Pod downward-api-ba30ba26-13f1-472d-85a7-f3f56d88d3dc no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:42:45.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2143" for this suite.
• [SLOW TEST:39.010 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":857,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:42:45.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:43:06.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5074" for this suite.
• [SLOW TEST:21.163 seconds]
[sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":52,"skipped":867,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:43:06.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jul 4 08:43:07.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4117'
Jul 4 08:43:07.517: INFO: stderr: ""
Jul 4 08:43:07.517: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 4 08:43:07.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4117'
Jul 4 08:43:07.618: INFO: stderr: ""
Jul 4 08:43:07.618: INFO: stdout: "update-demo-nautilus-pr7zz update-demo-nautilus-wm2rs "
Jul 4 08:43:07.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists .
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117' Jul 4 08:43:07.709: INFO: stderr: "" Jul 4 08:43:07.709: INFO: stdout: "" Jul 4 08:43:07.709: INFO: update-demo-nautilus-pr7zz is created but not running Jul 4 08:43:12.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4117' Jul 4 08:43:12.797: INFO: stderr: "" Jul 4 08:43:12.797: INFO: stdout: "update-demo-nautilus-pr7zz update-demo-nautilus-wm2rs " Jul 4 08:43:12.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117' Jul 4 08:43:12.891: INFO: stderr: "" Jul 4 08:43:12.891: INFO: stdout: "" Jul 4 08:43:12.891: INFO: update-demo-nautilus-pr7zz is created but not running Jul 4 08:43:17.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4117' Jul 4 08:43:18.107: INFO: stderr: "" Jul 4 08:43:18.107: INFO: stdout: "update-demo-nautilus-pr7zz update-demo-nautilus-wm2rs " Jul 4 08:43:18.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117' Jul 4 08:43:18.330: INFO: stderr: "" Jul 4 08:43:18.330: INFO: stdout: "" Jul 4 08:43:18.330: INFO: update-demo-nautilus-pr7zz is created but not running Jul 4 08:43:23.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4117' Jul 4 08:43:23.499: INFO: stderr: "" Jul 4 08:43:23.499: INFO: stdout: "update-demo-nautilus-pr7zz update-demo-nautilus-wm2rs " Jul 4 08:43:23.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117' Jul 4 08:43:23.584: INFO: stderr: "" Jul 4 08:43:23.584: INFO: stdout: "" Jul 4 08:43:23.584: INFO: update-demo-nautilus-pr7zz is created but not running Jul 4 08:43:28.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4117' Jul 4 08:43:28.684: INFO: stderr: "" Jul 4 08:43:28.684: INFO: stdout: "update-demo-nautilus-pr7zz update-demo-nautilus-wm2rs " Jul 4 08:43:28.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117' Jul 4 08:43:28.771: INFO: stderr: "" Jul 4 08:43:28.771: INFO: stdout: "true" Jul 4 08:43:28.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pr7zz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4117' Jul 4 08:43:28.857: INFO: stderr: "" Jul 4 08:43:28.857: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 4 08:43:28.857: INFO: validating pod update-demo-nautilus-pr7zz Jul 4 08:43:28.860: INFO: got data: { "image": "nautilus.jpg" } Jul 4 08:43:28.860: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 4 08:43:28.860: INFO: update-demo-nautilus-pr7zz is verified up and running Jul 4 08:43:28.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wm2rs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4117' Jul 4 08:43:28.956: INFO: stderr: "" Jul 4 08:43:28.956: INFO: stdout: "true" Jul 4 08:43:28.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wm2rs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4117' Jul 4 08:43:29.037: INFO: stderr: "" Jul 4 08:43:29.037: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 4 08:43:29.037: INFO: validating pod update-demo-nautilus-wm2rs Jul 4 08:43:29.040: INFO: got data: { "image": "nautilus.jpg" } Jul 4 08:43:29.040: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jul 4 08:43:29.040: INFO: update-demo-nautilus-wm2rs is verified up and running
STEP: using delete to clean up resources
Jul 4 08:43:29.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4117'
Jul 4 08:43:29.134: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 4 08:43:29.134: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 4 08:43:29.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4117'
Jul 4 08:43:29.217: INFO: stderr: "No resources found in kubectl-4117 namespace.\n"
Jul 4 08:43:29.217: INFO: stdout: ""
Jul 4 08:43:29.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4117 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 4 08:43:29.308: INFO: stderr: ""
Jul 4 08:43:29.308: INFO: stdout: "update-demo-nautilus-pr7zz\nupdate-demo-nautilus-wm2rs\n"
Jul 4 08:43:29.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4117'
Jul 4 08:43:29.904: INFO: stderr: "No resources found in kubectl-4117 namespace.\n"
Jul 4 08:43:29.904: INFO: stdout: ""
Jul 4 08:43:29.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4117 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 4 08:43:29.999: INFO: stderr: ""
Jul 4 08:43:29.999: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:43:29.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4117" for this suite.
• [SLOW TEST:23.053 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":53,"skipped":879,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:43:30.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 4 08:43:30.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7244'
Jul 4 08:43:30.233: INFO: stderr: ""
Jul 4 08:43:30.233: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jul 4 08:43:35.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7244 -o json'
Jul 4 08:43:35.397: INFO: stderr: ""
Jul 4 08:43:35.397: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-04T08:43:30Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7244\",\n \"resourceVersion\": \"10570\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7244/pods/e2e-test-httpd-pod\",\n \"uid\": \"5589bb13-f301-4129-8fab-b0eedc1c3428\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-xhqsk\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-xhqsk\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-xhqsk\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-04T08:43:30Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-04T08:43:33Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-04T08:43:33Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-04T08:43:30Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://090d8a9d2ac2f59fbacf2c3c314029db44ce145c6549fbdd9ce7d9c33c13653c\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-04T08:43:32Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.25\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.25\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-04T08:43:30Z\"\n }\n}\n"
STEP: replace the image in the pod
Jul 4 08:43:35.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7244'
Jul 4 08:43:35.616: INFO: stderr: ""
Jul 4 08:43:35.616: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795
Jul 4 08:43:35.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7244'
Jul 4 08:43:46.215: INFO: stderr: ""
Jul 4 08:43:46.215: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:43:46.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7244" for this suite.
• [SLOW TEST:16.230 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":54,"skipped":908,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:43:46.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-216/configmap-test-fef20a81-a3b2-41d5-a54f-db810be0c333
STEP: Creating a pod to test consume configMaps
Jul 4 08:43:46.327: INFO: Waiting up to 5m0s for pod "pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f" in namespace "configmap-216" to be "success or failure"
Jul 4 08:43:46.359: INFO: Pod "pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.698021ms
Jul 4 08:43:48.363: INFO: Pod "pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035862213s
Jul 4 08:43:50.367: INFO: Pod "pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04015265s
STEP: Saw pod success
Jul 4 08:43:50.368: INFO: Pod "pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f" satisfied condition "success or failure"
Jul 4 08:43:50.370: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f container env-test:
STEP: delete the pod
Jul 4 08:43:50.404: INFO: Waiting for pod pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f to disappear
Jul 4 08:43:50.422: INFO: Pod pod-configmaps-cecd05d8-d1e7-4a97-9d7c-7482024b394f no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:43:50.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-216" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":924,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:43:50.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul 4 08:43:50.526: INFO: Waiting up to 5m0s for pod "downward-api-e3165bce-daf5-494f-81ba-70fdc7417895" in namespace "downward-api-145" to be "success or failure"
Jul 4 08:43:50.530: INFO: Pod "downward-api-e3165bce-daf5-494f-81ba-70fdc7417895": Phase="Pending", Reason="", readiness=false. Elapsed: 3.802211ms
Jul 4 08:43:52.545: INFO: Pod "downward-api-e3165bce-daf5-494f-81ba-70fdc7417895": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019251717s
Jul 4 08:43:54.550: INFO: Pod "downward-api-e3165bce-daf5-494f-81ba-70fdc7417895": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023534119s
STEP: Saw pod success
Jul 4 08:43:54.550: INFO: Pod "downward-api-e3165bce-daf5-494f-81ba-70fdc7417895" satisfied condition "success or failure"
Jul 4 08:43:54.552: INFO: Trying to get logs from node jerma-worker pod downward-api-e3165bce-daf5-494f-81ba-70fdc7417895 container dapi-container:
STEP: delete the pod
Jul 4 08:43:54.624: INFO: Waiting for pod downward-api-e3165bce-daf5-494f-81ba-70fdc7417895 to disappear
Jul 4 08:43:54.627: INFO: Pod downward-api-e3165bce-daf5-494f-81ba-70fdc7417895 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:43:54.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-145" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":967,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:43:54.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 4 08:44:05.080: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 4 08:44:05.110: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 4 08:44:07.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 4 08:44:07.115: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 4 08:44:09.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 4 08:44:09.115: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:44:09.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5874" for this suite.
• [SLOW TEST:14.489 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":979,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:44:09.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3610.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3610.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 4 08:44:29.343: INFO: DNS probes using dns-3610/dns-test-4c9bf7c1-e7fa-4d20-ad60-2e1b45b7d16e succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:44:29.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3610" for this suite.
• [SLOW TEST:20.315 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":58,"skipped":997,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:44:29.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:44:29.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2794" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":59,"skipped":1007,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:44:29.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:45:30.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8039" for this suite.
• [SLOW TEST:60.257 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1019,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:45:30.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-da022cae-8079-4ced-8164-8e569c5f3e7d
STEP: Creating a pod to test consume secrets
Jul 4 08:45:30.144: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648" in namespace "projected-1483" to be "success or failure"
Jul 4 08:45:30.152: INFO: Pod "pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648": Phase="Pending", Reason="", readiness=false. Elapsed: 8.38494ms
Jul 4 08:45:32.156: INFO: Pod "pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012119569s
Jul 4 08:45:34.160: INFO: Pod "pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016528571s
STEP: Saw pod success
Jul 4 08:45:34.160: INFO: Pod "pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648" satisfied condition "success or failure"
Jul 4 08:45:34.163: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648 container projected-secret-volume-test:
STEP: delete the pod
Jul 4 08:45:34.209: INFO: Waiting for pod pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648 to disappear
Jul 4 08:45:34.218: INFO: Pod pod-projected-secrets-3ecfa5eb-010e-4629-b294-df529a8bd648 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:45:34.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1483" for this suite.
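The "mappings and Item Mode" in this test's name refer to a projected secret volume whose `items` list remaps a secret key to a custom file path with an explicit file mode. A sketch of the volume stanza the pod spec carries (key, path, and mode below are assumptions for illustration; the log does not show them):

```python
def projected_secret_volume(secret_name: str, key: str, path: str, mode: int) -> dict:
    """Build the dict form of a projected-secret volume with one item mapping.

    `mode` is the decimal form of the octal file mode the kubelet applies
    to the projected file (e.g. 0o400).
    """
    return {
        "name": "projected-secret-volume",
        "projected": {
            "sources": [{
                "secret": {
                    "name": secret_name,
                    "items": [{"key": key, "path": path, "mode": mode}],
                }
            }]
        },
    }

vol = projected_secret_volume("projected-secret-test-map-da022cae-8079-4ced-8164-8e569c5f3e7d",
                              "data-1", "new-path-data-1", 0o400)
item = vol["projected"]["sources"][0]["secret"]["items"][0]
assert item["path"] == "new-path-data-1" and item["mode"] == 0o400
```

The test container then reads the remapped file and checks both its content and permission bits, which is what "success or failure" in the log refers to.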
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1025,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:45:34.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jul 4 08:45:34.288: INFO: >>> kubeConfig: /root/.kube/config
Jul 4 08:45:37.219: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:45:46.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7750" for this suite.
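The property checked above is that two CRDs sharing a group and version but declaring different kinds both surface in the apiserver's OpenAPI document under distinct definition keys, so neither overwrites the other. A sketch of that keying (group, kinds, and the key format below are illustrative, not taken from the log):

```python
def openapi_definitions(crds: list) -> dict:
    """Key each CRD's schema by group.version.kind so distinct kinds never collide."""
    defs = {}
    for crd in crds:
        key = f'{crd["group"]}.{crd["version"]}.{crd["kind"]}'
        defs[key] = {
            "type": "object",
            "x-kubernetes-group-version-kind": [
                {"group": crd["group"], "version": crd["version"], "kind": crd["kind"]}
            ],
        }
    return defs

crds = [
    {"group": "crd-publish-openapi-test.example.com", "version": "v1", "kind": "Foo"},
    {"group": "crd-publish-openapi-test.example.com", "version": "v1", "kind": "Bar"},
]
defs = openapi_definitions(crds)
assert len(defs) == 2  # same group/version, different kinds -> two definitions
```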
• [SLOW TEST:12.537 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":62,"skipped":1029,"failed":0}
S
------------------------------
[sig-apps] Job
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:45:46.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jul 4 08:45:51.357: INFO: Successfully updated pod "adopt-release-qsn5v"
STEP: Checking that the Job readopts the Pod
Jul 4 08:45:51.357: INFO: Waiting up to 15m0s for pod "adopt-release-qsn5v" in namespace "job-8919" to be "adopted"
Jul 4 08:45:51.379: INFO: Pod "adopt-release-qsn5v": Phase="Running", Reason="", readiness=true. Elapsed: 21.707603ms
Jul 4 08:45:53.383: INFO: Pod "adopt-release-qsn5v": Phase="Running", Reason="", readiness=true. Elapsed: 2.025383577s
Jul 4 08:45:53.383: INFO: Pod "adopt-release-qsn5v" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jul 4 08:45:53.893: INFO: Successfully updated pod "adopt-release-qsn5v"
STEP: Checking that the Job releases the Pod
Jul 4 08:45:53.893: INFO: Waiting up to 15m0s for pod "adopt-release-qsn5v" in namespace "job-8919" to be "released"
Jul 4 08:45:53.918: INFO: Pod "adopt-release-qsn5v": Phase="Running", Reason="", readiness=true. Elapsed: 24.687297ms
Jul 4 08:45:56.218: INFO: Pod "adopt-release-qsn5v": Phase="Running", Reason="", readiness=true. Elapsed: 2.324254629s
Jul 4 08:45:56.218: INFO: Pod "adopt-release-qsn5v" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:45:56.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8919" for this suite.
• [SLOW TEST:9.600 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":63,"skipped":1030,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:45:56.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul 4 08:45:56.633: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 4 08:45:56.643: INFO: Waiting for terminating namespaces to be deleted...
Jul 4 08:45:56.645: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Jul 4 08:45:56.651: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul 4 08:45:56.651: INFO: Container kindnet-cni ready: true, restart count 0
Jul 4 08:45:56.651: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul 4 08:45:56.651: INFO: Container kube-proxy ready: true, restart count 0
Jul 4 08:45:56.651: INFO: adopt-release-qsn5v from job-8919 started at 2020-07-04 08:45:46 +0000 UTC (1 container statuses recorded)
Jul 4 08:45:56.651: INFO: Container c ready: true, restart count 0
Jul 4 08:45:56.651: INFO: adopt-release-wjgwh from job-8919 started at 2020-07-04 08:45:46 +0000 UTC (1 container statuses recorded)
Jul 4 08:45:56.651: INFO: Container c ready: true, restart count 0
Jul 4 08:45:56.651: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Jul 4 08:45:56.672: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul 4 08:45:56.672: INFO: Container kube-proxy ready: true, restart count 0
Jul 4 08:45:56.672: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul 4 08:45:56.672: INFO: Container kindnet-cni ready: true, restart count 0
Jul 4 08:45:56.672: INFO: adopt-release-b7b7n from job-8919 started at 2020-07-04 08:45:54 +0000 UTC (1 container statuses recorded)
Jul 4 08:45:56.672: INFO: Container c ready: false, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.161e8046189e3ccb], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:45:57.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1099" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":64,"skipped":1037,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:45:57.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 4 08:45:58.402: INFO: Waiting up to 5m0s for pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70" in namespace "emptydir-7959" to be "success or failure"
Jul 4 08:45:58.554: INFO: Pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70": Phase="Pending", Reason="", readiness=false. Elapsed: 151.3229ms
Jul 4 08:46:00.558: INFO: Pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155136992s
Jul 4 08:46:02.674: INFO: Pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70": Phase="Running", Reason="", readiness=true. Elapsed: 4.271009577s
Jul 4 08:46:04.677: INFO: Pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.274596362s
STEP: Saw pod success
Jul 4 08:46:04.677: INFO: Pod "pod-63bb16cd-e68c-4ab8-850f-c19245369d70" satisfied condition "success or failure"
Jul 4 08:46:04.680: INFO: Trying to get logs from node jerma-worker2 pod pod-63bb16cd-e68c-4ab8-850f-c19245369d70 container test-container:
STEP: delete the pod
Jul 4 08:46:04.836: INFO: Waiting for pod pod-63bb16cd-e68c-4ab8-850f-c19245369d70 to disappear
Jul 4 08:46:04.875: INFO: Pod pod-63bb16cd-e68c-4ab8-850f-c19245369d70 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:46:04.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7959" for this suite.
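The "(root,0644,tmpfs)" case above has the test container create a file in the tmpfs-backed emptyDir mount with mode 0644 and then verify both content and permission bits. A local analogue of that check (the mount path, file name, and content are illustrative; a temporary directory stands in for the emptyDir):

```python
import os
import tempfile

def write_and_check(content: bytes):
    """Create a file with mode 0644 and return its (mode_bits, content)."""
    with tempfile.TemporaryDirectory() as mount:  # stands in for the emptyDir mount
        path = os.path.join(mount, "test-file")
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
        os.write(fd, content)
        os.close(fd)
        os.chmod(path, 0o644)  # make the mode explicit regardless of umask
        mode = os.stat(path).st_mode & 0o777
        with open(path, "rb") as f:
            data = f.read()
    return mode, data

mode, data = write_and_check(b"mount-tester new file\n")
assert mode == 0o644 and data == b"mount-tester new file\n"
```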
• [SLOW TEST:7.171 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1047,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:46:04.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 4 08:46:05.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69" in namespace "downward-api-1911" to be "success or failure"
Jul 4 08:46:05.135: INFO: Pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69": Phase="Pending", Reason="", readiness=false. Elapsed: 84.716345ms
Jul 4 08:46:07.139: INFO: Pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088936426s
Jul 4 08:46:09.142: INFO: Pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092082124s
Jul 4 08:46:11.145: INFO: Pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095476781s
STEP: Saw pod success
Jul 4 08:46:11.145: INFO: Pod "downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69" satisfied condition "success or failure"
Jul 4 08:46:11.148: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69 container client-container:
STEP: delete the pod
Jul 4 08:46:11.167: INFO: Waiting for pod downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69 to disappear
Jul 4 08:46:11.173: INFO: Pod downwardapi-volume-34f7800b-6974-4eb7-b956-744b475b3f69 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:46:11.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1911" for this suite.
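The downward-API volume in this test exposes the container's CPU limit via a `resourceFieldRef` on `limits.cpu`. With the default divisor of 1, the value written into the projected file is the limit in whole cores, rounded up. A sketch of that formatting rule (the helper and the millicore values are illustrative):

```python
import math

def cpu_limit_file_contents(cpu_limit_millicores: int,
                            divisor_millicores: int = 1000) -> str:
    """Format a CPU limit the way a downward-API resourceFieldRef does:
    divide by the divisor and round up to a whole number."""
    return str(math.ceil(cpu_limit_millicores / divisor_millicores))

assert cpu_limit_file_contents(250) == "1"   # 250m rounds up to 1 core
assert cpu_limit_file_contents(2000) == "2"  # 2 cores stay 2
```

The test container simply cats the projected file and the framework compares it against the limit declared in the pod spec.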
• [SLOW TEST:6.296 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1075,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:46:11.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-d7bddd00-689c-4632-9bff-9b5841320d90
STEP: Creating a pod to test consume configMaps
Jul 4 08:46:11.751: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3" in namespace "projected-3420" to be "success or failure"
Jul 4 08:46:11.871: INFO: Pod "pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3": Phase="Pending", Reason="", readiness=false. Elapsed: 120.008986ms
Jul 4 08:46:13.875: INFO: Pod "pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12416892s
Jul 4 08:46:15.879: INFO: Pod "pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127823244s
STEP: Saw pod success
Jul 4 08:46:15.879: INFO: Pod "pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3" satisfied condition "success or failure"
Jul 4 08:46:15.881: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3 container projected-configmap-volume-test:
STEP: delete the pod
Jul 4 08:46:15.898: INFO: Waiting for pod pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3 to disappear
Jul 4 08:46:15.924: INFO: Pod pod-projected-configmaps-1d91c59d-73bb-4a4c-8232-0c88036607b3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:46:15.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3420" for this suite.
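Unlike the earlier projected-secret case, this ConfigMap test declares no item list, so every key in the ConfigMap's data is materialized as one file named after the key. A local analogue of that projection (key and value below are assumptions; the log does not show the payload):

```python
import os
import tempfile

def materialize_configmap(data: dict, mount_dir: str) -> None:
    """Write each ConfigMap key as a file named after the key, as a
    projected volume with no explicit `items` list does."""
    for key, value in data.items():
        with open(os.path.join(mount_dir, key), "w") as f:
            f.write(value)

with tempfile.TemporaryDirectory() as mount:  # stands in for the volume mount
    materialize_configmap({"data-1": "value-1"}, mount)
    with open(os.path.join(mount, "data-1")) as f:
        assert f.read() == "value-1"
```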
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1093,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:46:15.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 4 08:46:16.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4" in namespace "projected-4767" to be "success or failure"
Jul 4 08:46:16.010: INFO: Pod "downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.641475ms
Jul 4 08:46:18.013: INFO: Pod "downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009597924s
Jul 4 08:46:20.018: INFO: Pod "downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013860868s
STEP: Saw pod success
Jul 4 08:46:20.018: INFO: Pod "downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4" satisfied condition "success or failure"
Jul 4 08:46:20.021: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4 container client-container:
STEP: delete the pod
Jul 4 08:46:20.041: INFO: Waiting for pod downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4 to disappear
Jul 4 08:46:20.069: INFO: Pod downwardapi-volume-59810a76-e4e2-4257-ac78-bf23f11e53d4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:46:20.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4767" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1105,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:46:20.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jul 4 08:46:20.148: INFO: namespace kubectl-8340
Jul 4 08:46:20.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8340'
Jul 4 08:46:20.454: INFO: stderr: ""
Jul 4 08:46:20.454: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul 4 08:46:21.459: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 4 08:46:21.459: INFO: Found 0 / 1
Jul 4 08:46:22.626: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 4 08:46:22.626: INFO: Found 0 / 1
Jul 4 08:46:23.458: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 4 08:46:23.458: INFO: Found 0 / 1
Jul 4 08:46:24.458: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 4 08:46:24.458: INFO: Found 1 / 1
Jul 4 08:46:24.458: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jul 4 08:46:24.462: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 4 08:46:24.462: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jul 4 08:46:24.462: INFO: wait on agnhost-master startup in kubectl-8340
Jul 4 08:46:24.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-829mc agnhost-master --namespace=kubectl-8340'
Jul 4 08:46:24.573: INFO: stderr: ""
Jul 4 08:46:24.573: INFO: stdout: "Paused\n"
STEP: exposing RC
Jul 4 08:46:24.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8340'
Jul 4 08:46:24.866: INFO: stderr: ""
Jul 4 08:46:24.866: INFO: stdout: "service/rm2 exposed\n"
Jul 4 08:46:25.064: INFO: Service rm2 in namespace kubectl-8340 found.
STEP: exposing service
Jul 4 08:46:27.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8340'
Jul 4 08:46:27.246: INFO: stderr: ""
Jul 4 08:46:27.246: INFO: stdout: "service/rm3 exposed\n"
Jul 4 08:46:27.320: INFO: Service rm3 in namespace kubectl-8340 found.
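A rough model of what the two `kubectl expose` invocations above did: copy the selector from the source object (the RC's pod selector, or the first service's selector when exposing a service) and build a Service that maps `--port` to `--target-port`. Field subset and helper name are illustrative:

```python
def expose(selector: dict, name: str, port: int, target_port: int) -> dict:
    """Build a minimal Service dict the way `kubectl expose` derives one."""
    return {
        "metadata": {"name": name},
        "spec": {
            "selector": dict(selector),
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }

# expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
rm2 = expose({"app": "agnhost"}, "rm2", 1234, 6379)
# expose service rm2 --name=rm3 --port=2345 --target-port=6379
rm3 = expose(rm2["spec"]["selector"], "rm3", 2345, 6379)

assert rm2["spec"]["ports"][0] == {"port": 1234, "targetPort": 6379}
assert rm3["spec"]["selector"] == {"app": "agnhost"}  # selector carried through
```

Both services end up selecting the same agnhost pods, which is why the test only needs to confirm that `rm2` and `rm3` exist.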
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:46:29.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8340" for this suite.
• [SLOW TEST:9.259 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":69,"skipped":1106,"failed":0}
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:46:29.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul 4 08:46:29.426: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 4 08:46:29.434: INFO: Waiting for terminating namespaces to be deleted...
Jul 4 08:46:29.436: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Jul 4 08:46:29.440: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul 4 08:46:29.440: INFO: Container kindnet-cni ready: true, restart count 0
Jul 4 08:46:29.440: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul 4 08:46:29.440: INFO: Container kube-proxy ready: true, restart count 0
Jul 4 08:46:29.440: INFO: adopt-release-qsn5v from job-8919 started at 2020-07-04 08:45:46 +0000 UTC (1 container statuses recorded)
Jul 4 08:46:29.440: INFO: Container c ready: true, restart count 0
Jul 4 08:46:29.440: INFO: adopt-release-wjgwh from job-8919 started at 2020-07-04 08:45:46 +0000 UTC (1 container statuses recorded)
Jul 4 08:46:29.440: INFO: Container c ready: true, restart count 0
Jul 4 08:46:29.440: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Jul 4 08:46:29.445: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul 4 08:46:29.445: INFO: Container kube-proxy ready: true, restart count 0
Jul 4 08:46:29.445: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul 4 08:46:29.445: INFO: Container kindnet-cni ready: true, restart count 0
Jul 4 08:46:29.445: INFO: adopt-release-b7b7n from job-8919 started at 2020-07-04 08:45:54 +0000 UTC (1 container statuses recorded)
Jul 4 08:46:29.445: INFO: Container c ready: true, restart count 0
Jul 4 08:46:29.445: INFO: agnhost-master-829mc from kubectl-8340 started at 2020-07-04 08:46:20 +0000 UTC (1 container statuses recorded)
Jul 4 08:46:29.445: INFO: Container agnhost-master ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-66676659-1621-4a3e-b04c-c393717e4d57 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-66676659-1621-4a3e-b04c-c393717e4d57 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-66676659-1621-4a3e-b04c-c393717e4d57
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:46:51.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5935" for this suite.
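The scheduling rule exercised by pod1/pod2/pod3 above: two pods conflict on a node only when hostIP, hostPort, and protocol all collide, with the wildcard address `0.0.0.0` overlapping every hostIP. A sketch of that check (not scheduler code; the helper is illustrative):

```python
def conflicts(a: tuple, b: tuple) -> bool:
    """True if two (hostIP, hostPort, protocol) bindings collide on one node."""
    (ip_a, port_a, proto_a), (ip_b, port_b, proto_b) = a, b
    same_ip = ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)  # wildcard overlaps all
    return same_ip and port_a == port_b and proto_a == proto_b

pod1 = ("127.0.0.1", 54321, "TCP")
pod2 = ("127.0.0.2", 54321, "TCP")  # same port, different hostIP -> schedulable
pod3 = ("127.0.0.2", 54321, "UDP")  # same ip/port, different protocol -> schedulable

assert not conflicts(pod1, pod2)
assert not conflicts(pod2, pod3)
assert conflicts(pod2, ("127.0.0.2", 54321, "TCP"))     # true duplicate
assert conflicts(pod1, ("0.0.0.0", 54321, "TCP"))       # wildcard collides
```

This is why all three test pods land on the same node despite sharing hostPort 54321.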
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:22.673 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":70,"skipped":1106,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:46:52.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-af21f5a5-8473-4898-b08a-d22f8ade9416
STEP: Creating secret with name s-test-opt-upd-d4db8c67-1ac7-4bd8-84fc-68637ff2e948
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-af21f5a5-8473-4898-b08a-d22f8ade9416
STEP: Updating secret s-test-opt-upd-d4db8c67-1ac7-4bd8-84fc-68637ff2e948
STEP: Creating secret with name s-test-opt-create-f8057e8a-7d3b-459d-9aa8-fd8a5b41cf09
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:48:09.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5780" for this suite.
• [SLOW TEST:77.157 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1124,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:48:09.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API
volume plugin Jul 4 08:48:09.285: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19" in namespace "downward-api-8534" to be "success or failure" Jul 4 08:48:09.289: INFO: Pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19": Phase="Pending", Reason="", readiness=false. Elapsed: 3.107915ms Jul 4 08:48:11.293: INFO: Pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007563866s Jul 4 08:48:13.297: INFO: Pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19": Phase="Running", Reason="", readiness=true. Elapsed: 4.012053189s Jul 4 08:48:15.302: INFO: Pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016236591s STEP: Saw pod success Jul 4 08:48:15.302: INFO: Pod "downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19" satisfied condition "success or failure" Jul 4 08:48:15.304: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19 container client-container: STEP: delete the pod Jul 4 08:48:15.370: INFO: Waiting for pod downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19 to disappear Jul 4 08:48:15.385: INFO: Pod downwardapi-volume-1088aedf-98a1-4533-845d-9061b4ed5b19 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:48:15.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8534" for this suite. 
• [SLOW TEST:6.226 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1131,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:48:15.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jul 4 08:48:15.466: INFO: PodSpec: initContainers in spec.initContainers Jul 4 08:49:01.488: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-52c72c1b-81a1-4aab-a088-427a23aca46a", GenerateName:"", Namespace:"init-container-6809", 
SelfLink:"/api/v1/namespaces/init-container-6809/pods/pod-init-52c72c1b-81a1-4aab-a088-427a23aca46a", UID:"f7ce3200-530d-4283-9200-37ca09b6501b", ResourceVersion:"12185", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729449295, loc:(*time.Location)(0x78f7140)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"466030794"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-z7klj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0015fc900), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", 
Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z7klj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z7klj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z7klj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f6d658), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00204ec60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f6d6e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f6d700)}}, HostAliases:[]v1.HostAlias(nil), 
PriorityClassName:"", Priority:(*int32)(0xc002f6d708), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f6d70c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449295, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449295, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449295, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729449295, loc:(*time.Location)(0x78f7140)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.47", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.47"}}, StartTime:(*v1.Time)(0xc001645560), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016a2700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016a27e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://8c9d54f19a4f2d74071889c64457234d7519a101d5d90458cbb16ca8d8a5a659", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0016455e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0016455a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002f6d78f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:49:01.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6809" for this suite. 
• [SLOW TEST:46.163 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":73,"skipped":1134,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:49:01.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-fbd6d912-f8bd-4540-9031-f1693030002d STEP: Creating a pod to test consume secrets Jul 4 08:49:01.687: INFO: Waiting up to 5m0s for pod "pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1" in namespace "secrets-4105" to be "success or failure" Jul 4 08:49:01.692: INFO: Pod "pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360595ms Jul 4 08:49:03.706: INFO: Pod "pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018154001s Jul 4 08:49:05.710: INFO: Pod "pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022571531s STEP: Saw pod success Jul 4 08:49:05.710: INFO: Pod "pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1" satisfied condition "success or failure" Jul 4 08:49:05.713: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1 container secret-volume-test: STEP: delete the pod Jul 4 08:49:05.857: INFO: Waiting for pod pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1 to disappear Jul 4 08:49:05.990: INFO: Pod pod-secrets-33761e39-81c9-4656-a0b2-3eb0c7a1aef1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:49:05.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4105" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1139,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:49:06.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 4 08:49:10.112: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:49:10.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5832" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1160,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:49:10.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6822, will wait for the garbage collector to delete the pods Jul 4 08:49:16.623: INFO: Deleting Job.batch foo took: 35.082565ms Jul 4 
08:49:16.923: INFO: Terminating Job.batch foo pods took: 300.319287ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:49:56.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6822" for this suite. • [SLOW TEST:45.907 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":76,"skipped":1209,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:49:56.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-wwl5 STEP: Creating a pod to test atomic-volume-subpath Jul 4 08:49:56.438: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wwl5" in namespace "subpath-3972" to be "success or failure" Jul 4 08:49:56.458: INFO: Pod 
"pod-subpath-test-projected-wwl5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.827742ms Jul 4 08:49:58.617: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178951972s Jul 4 08:50:00.621: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 4.18270258s Jul 4 08:50:02.625: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 6.186717045s Jul 4 08:50:04.630: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 8.191183056s Jul 4 08:50:06.639: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 10.200608958s Jul 4 08:50:08.643: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 12.204521433s Jul 4 08:50:10.646: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 14.207848752s Jul 4 08:50:12.650: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 16.211703657s Jul 4 08:50:14.655: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 18.216765322s Jul 4 08:50:16.659: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 20.220999189s Jul 4 08:50:18.663: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 22.224561192s Jul 4 08:50:20.701: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Running", Reason="", readiness=true. Elapsed: 24.262585637s Jul 4 08:50:22.706: INFO: Pod "pod-subpath-test-projected-wwl5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.267007718s STEP: Saw pod success Jul 4 08:50:22.706: INFO: Pod "pod-subpath-test-projected-wwl5" satisfied condition "success or failure" Jul 4 08:50:22.708: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-wwl5 container test-container-subpath-projected-wwl5: STEP: delete the pod Jul 4 08:50:23.051: INFO: Waiting for pod pod-subpath-test-projected-wwl5 to disappear Jul 4 08:50:23.058: INFO: Pod pod-subpath-test-projected-wwl5 no longer exists STEP: Deleting pod pod-subpath-test-projected-wwl5 Jul 4 08:50:23.058: INFO: Deleting pod "pod-subpath-test-projected-wwl5" in namespace "subpath-3972" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:50:23.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3972" for this suite. • [SLOW TEST:26.848 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":77,"skipped":1220,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating 
a kubernetes client Jul 4 08:50:23.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jul 4 08:50:29.647: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7295 PodName:pod-sharedvolume-23b2afc7-124f-4a6d-95c5-1b23e7ba98a1 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 4 08:50:29.647: INFO: >>> kubeConfig: /root/.kube/config I0704 08:50:29.683232 6 log.go:172] (0xc004490210) (0xc001b12f00) Create stream I0704 08:50:29.683263 6 log.go:172] (0xc004490210) (0xc001b12f00) Stream added, broadcasting: 1 I0704 08:50:29.685524 6 log.go:172] (0xc004490210) Reply frame received for 1 I0704 08:50:29.685571 6 log.go:172] (0xc004490210) (0xc001d8e140) Create stream I0704 08:50:29.685592 6 log.go:172] (0xc004490210) (0xc001d8e140) Stream added, broadcasting: 3 I0704 08:50:29.686404 6 log.go:172] (0xc004490210) Reply frame received for 3 I0704 08:50:29.686424 6 log.go:172] (0xc004490210) (0xc001b12fa0) Create stream I0704 08:50:29.686433 6 log.go:172] (0xc004490210) (0xc001b12fa0) Stream added, broadcasting: 5 I0704 08:50:29.687193 6 log.go:172] (0xc004490210) Reply frame received for 5 I0704 08:50:29.756055 6 log.go:172] (0xc004490210) Data frame received for 3 I0704 08:50:29.756089 6 log.go:172] (0xc001d8e140) (3) Data frame handling I0704 08:50:29.756098 6 log.go:172] (0xc001d8e140) (3) Data frame sent I0704 08:50:29.756107 6 log.go:172] (0xc004490210) Data frame received for 3 I0704 08:50:29.756124 6 log.go:172] (0xc001d8e140) (3) Data frame 
handling I0704 08:50:29.756164 6 log.go:172] (0xc004490210) Data frame received for 5 I0704 08:50:29.756205 6 log.go:172] (0xc001b12fa0) (5) Data frame handling I0704 08:50:29.757610 6 log.go:172] (0xc004490210) Data frame received for 1 I0704 08:50:29.757631 6 log.go:172] (0xc001b12f00) (1) Data frame handling I0704 08:50:29.757640 6 log.go:172] (0xc001b12f00) (1) Data frame sent I0704 08:50:29.757653 6 log.go:172] (0xc004490210) (0xc001b12f00) Stream removed, broadcasting: 1 I0704 08:50:29.757673 6 log.go:172] (0xc004490210) Go away received I0704 08:50:29.757760 6 log.go:172] (0xc004490210) (0xc001b12f00) Stream removed, broadcasting: 1 I0704 08:50:29.757780 6 log.go:172] (0xc004490210) (0xc001d8e140) Stream removed, broadcasting: 3 I0704 08:50:29.757790 6 log.go:172] (0xc004490210) (0xc001b12fa0) Stream removed, broadcasting: 5 Jul 4 08:50:29.757: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:50:29.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7295" for this suite. 
• [SLOW TEST:6.578 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":78,"skipped":1274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 4 08:50:29.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jul 4 08:50:36.497: INFO: Successfully updated pod "labelsupdate28e89a89-f33e-4cde-b7d9-7661a119c1b0" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 4 08:50:38.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3335" for this suite. 
• [SLOW TEST:8.779 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1301,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:50:38.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-6a68ec0c-5587-4a17-80b9-8fff89275f09
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:50:46.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8168" for this suite.
• [SLOW TEST:7.463 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1307,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:50:46.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul 4 08:50:46.163: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2805 /api/v1/namespaces/watch-2805/configmaps/e2e-watch-test-resource-version 7543c0e5-cd9a-4d1d-8eca-746fc39e525d 12704 0 2020-07-04 08:50:46 +0000 UTC map[watch-this-configmap:from-resource-version]
map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 4 08:50:46.163: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2805 /api/v1/namespaces/watch-2805/configmaps/e2e-watch-test-resource-version 7543c0e5-cd9a-4d1d-8eca-746fc39e525d 12705 0 2020-07-04 08:50:46 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 4 08:50:46.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2805" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":81,"skipped":1360,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 4 08:50:46.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 4 08:50:46.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows
request with any unknown properties
Jul 4 08:50:49.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7522 create -f -'
Jul 4 08:50:53.488: INFO: stderr: ""
Jul 4 08:50:53.488: INFO: stdout: "e2e-test-crd-publish-openapi-963-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 4 08:50:53.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7522 delete e2e-test-crd-publish-openapi-963-crds test-cr'
Jul 4 08:50:53.626: INFO: stderr: ""
Jul 4 08:50:53.626: INFO: stdout: "e2e-test-crd-publish-openapi-963-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jul 4 08:50:53.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7522 apply -f -'
Jul 4 08:50:53.883: INFO: stderr: ""
Jul 4 08:50:53.884: INFO: stdout: "e2e-test-crd-publish-openapi-963-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 4 08:50:53.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7522 delete e2e-test-crd-publish-openapi-963-crds test-cr'
Jul 4 08:50:54.024: INFO: stderr: ""
Jul 4 08:50:54.024: INFO: stdout: "e2e-test-crd-publish-openapi-963-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul 4 08:50:54.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-963-crds'
Jul 4 08:50:54.255: INFO: stderr: ""
Jul 4 08:50:54.255: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-963-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t \n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t \n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t