I0530 21:09:12.285653 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0530 21:09:12.285887 6 e2e.go:109] Starting e2e run "3478e8b2-f39c-4d40-993e-0dbc31ec855d" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590872951 - Will randomize all specs
Will run 278 of 4842 specs

May 30 21:09:12.356: INFO: >>> kubeConfig: /root/.kube/config
May 30 21:09:12.360: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 30 21:09:12.384: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 30 21:09:12.415: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 30 21:09:12.415: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 30 21:09:12.415: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 30 21:09:12.422: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 30 21:09:12.422: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 30 21:09:12.422: INFO: e2e test version: v1.17.4
May 30 21:09:12.424: INFO: kube-apiserver version: v1.17.2
May 30 21:09:12.424: INFO: >>> kubeConfig: /root/.kube/config
May 30 21:09:12.429: INFO: Cluster IP family: ipv4
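For reference, a log of this shape is what the Kubernetes e2e.test binary produces when run against an existing cluster with a [Conformance] focus; the invocation below is a representative sketch, not recorded in this log:

  e2e.test --provider=skeleton \
    --kubeconfig=/root/.kube/config \
    --ginkgo.focus='\[Conformance\]'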
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:09:12.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
May 30 21:09:12.545: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6836.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6836.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6836.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6836.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6836.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6836.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6836.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6836.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6836.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6836.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 142.96.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.96.142_udp@PTR;check="$$(dig +tcp +noall +answer +search 142.96.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.96.142_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6836.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6836.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6836.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6836.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6836.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6836.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6836.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6836.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6836.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6836.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6836.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 142.96.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.96.142_udp@PTR;check="$$(dig +tcp +noall +answer +search 142.96.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.96.142_tcp@PTR;sleep 1; done
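The probe scripts above are single-line strings because they are passed verbatim as a container command, and the doubled dollar signs keep the pod spec's $(VAR) expansion from consuming the literal dollars. Unescaped and reflowed, every iteration is the same check repeated per record type and protocol, roughly:

  for i in $(seq 1 600); do
    # one probe of each kind; OK is written only if dig returned an answer
    check="$(dig +notcp +noall +answer +search dns-test-service.dns-6836.svc.cluster.local A)" \
      && test -n "$check" \
      && echo OK > /results/wheezy_udp@dns-test-service.dns-6836.svc.cluster.local
    # ...same pattern for TCP, the two SRV names, the pod A record, and the PTR record...
    sleep 1
  done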
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6836.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 142.96.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.96.142_udp@PTR;check="$$(dig +tcp +noall +answer +search 142.96.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.96.142_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 21:09:18.627: INFO: Unable to read wheezy_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b) May 30 21:09:18.630: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b) May 30 21:09:18.633: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b) May 30 21:09:18.635: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b) May 30 21:09:18.655: INFO: Unable to read jessie_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b) May 30 21:09:18.658: INFO: Unable to read jessie_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b) May 30 21:09:18.660: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b) May 30 21:09:18.662: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b) May 30 21:09:18.676: INFO: Lookups using dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b failed for: [wheezy_udp@dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_udp@dns-test-service.dns-6836.svc.cluster.local jessie_tcp@dns-test-service.dns-6836.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local] May 
May 30 21:09:23.681: INFO: Unable to read wheezy_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:23.685: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:23.689: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:23.692: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:23.715: INFO: Unable to read jessie_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:23.718: INFO: Unable to read jessie_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:23.721: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:23.724: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:23.746: INFO: Lookups using dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b failed for: [wheezy_udp@dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_udp@dns-test-service.dns-6836.svc.cluster.local jessie_tcp@dns-test-service.dns-6836.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local]
May 30 21:09:28.682: INFO: Unable to read wheezy_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:28.686: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:28.689: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:28.693: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:28.716: INFO: Unable to read jessie_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:28.719: INFO: Unable to read jessie_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:28.722: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:28.726: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:28.746: INFO: Lookups using dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b failed for: [wheezy_udp@dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_udp@dns-test-service.dns-6836.svc.cluster.local jessie_tcp@dns-test-service.dns-6836.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local]
May 30 21:09:33.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:33.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:33.714: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:33.716: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:33.738: INFO: Unable to read jessie_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:33.741: INFO: Unable to read jessie_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:33.744: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:33.747: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:33.768: INFO: Lookups using dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b failed for: [wheezy_udp@dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_udp@dns-test-service.dns-6836.svc.cluster.local jessie_tcp@dns-test-service.dns-6836.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local]
May 30 21:09:38.680: INFO: Unable to read wheezy_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:38.683: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:38.685: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:38.688: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:38.706: INFO: Unable to read jessie_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:38.710: INFO: Unable to read jessie_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:38.713: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:38.716: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:38.736: INFO: Lookups using dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b failed for: [wheezy_udp@dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_udp@dns-test-service.dns-6836.svc.cluster.local jessie_tcp@dns-test-service.dns-6836.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local]
May 30 21:09:43.682: INFO: Unable to read wheezy_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:43.684: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:43.687: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:43.690: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:43.709: INFO: Unable to read jessie_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:43.713: INFO: Unable to read jessie_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:43.716: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:43.719: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:43.754: INFO: Lookups using dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b failed for: [wheezy_udp@dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_udp@dns-test-service.dns-6836.svc.cluster.local jessie_tcp@dns-test-service.dns-6836.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6836.svc.cluster.local]
May 30 21:09:48.682: INFO: Unable to read wheezy_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:48.686: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:48.716: INFO: Unable to read jessie_udp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:48.720: INFO: Unable to read jessie_tcp@dns-test-service.dns-6836.svc.cluster.local from pod dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b: the server could not find the requested resource (get pods dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b)
May 30 21:09:48.745: INFO: Lookups using dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b failed for: [wheezy_udp@dns-test-service.dns-6836.svc.cluster.local wheezy_tcp@dns-test-service.dns-6836.svc.cluster.local jessie_udp@dns-test-service.dns-6836.svc.cluster.local jessie_tcp@dns-test-service.dns-6836.svc.cluster.local]
May 30 21:09:53.745: INFO: DNS probes using dns-6836/dns-test-d03d5eed-35a9-4db9-ab33-60d687ac213b succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:09:53.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6836" for this suite.
• [SLOW TEST:41.963 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":1,"skipped":56,"failed":0}
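The early "Unable to read" failures above are the expected polling phase: the probe pod writes its result files only once DNS answers, so the reader retries until the records resolve. To spot-check the same service record by hand, a throwaway client pod works; the image is illustrative:

  kubectl run dns-check --rm -it --restart=Never \
    --image=docker.io/library/busybox:1.29 \
    -- nslookup dns-test-service.dns-6836.svc.cluster.local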
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:09:54.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-521b3390-7637-482e-b168-b48b190bbb47
STEP: Creating a pod to test consume secrets
May 30 21:09:54.735: INFO: Waiting up to 5m0s for pod "pod-secrets-720e7407-4d78-49d6-b34a-07ade53f033a" in namespace "secrets-5281" to be "success or failure"
May 30 21:09:54.751: INFO: Pod "pod-secrets-720e7407-4d78-49d6-b34a-07ade53f033a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.254843ms
May 30 21:09:56.756: INFO: Pod "pod-secrets-720e7407-4d78-49d6-b34a-07ade53f033a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020956097s
May 30 21:09:58.759: INFO: Pod "pod-secrets-720e7407-4d78-49d6-b34a-07ade53f033a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024276606s
STEP: Saw pod success
May 30 21:09:58.759: INFO: Pod "pod-secrets-720e7407-4d78-49d6-b34a-07ade53f033a" satisfied condition "success or failure"
May 30 21:09:58.762: INFO: Trying to get logs from node jerma-worker pod pod-secrets-720e7407-4d78-49d6-b34a-07ade53f033a container secret-volume-test:
STEP: delete the pod
May 30 21:09:58.869: INFO: Waiting for pod pod-secrets-720e7407-4d78-49d6-b34a-07ade53f033a to disappear
May 30 21:09:58.882: INFO: Pod pod-secrets-720e7407-4d78-49d6-b34a-07ade53f033a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:09:58.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5281" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":81,"failed":0}
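The Secrets test above mounts one secret into a single pod at two mount points; a minimal sketch of that shape, with illustrative names:

  kubectl create secret generic secret-test --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-two-volumes
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
      volumeMounts:
      - name: secret-volume-1
        mountPath: /etc/secret-volume-1
      - name: secret-volume-2
        mountPath: /etc/secret-volume-2
    volumes:
    - name: secret-volume-1
      secret:
        secretName: secret-test
    - name: secret-volume-2
      secret:
        secretName: secret-test
  EOF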
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:09:58.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 30 21:09:58.931: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61e9ec75-438f-46b9-84fd-360145a00a01" in namespace "projected-624" to be "success or failure"
May 30 21:09:58.936: INFO: Pod "downwardapi-volume-61e9ec75-438f-46b9-84fd-360145a00a01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27081ms
May 30 21:10:00.940: INFO: Pod "downwardapi-volume-61e9ec75-438f-46b9-84fd-360145a00a01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008576575s
May 30 21:10:02.944: INFO: Pod "downwardapi-volume-61e9ec75-438f-46b9-84fd-360145a00a01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012612505s
STEP: Saw pod success
May 30 21:10:02.944: INFO: Pod "downwardapi-volume-61e9ec75-438f-46b9-84fd-360145a00a01" satisfied condition "success or failure"
May 30 21:10:02.947: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-61e9ec75-438f-46b9-84fd-360145a00a01 container client-container:
STEP: delete the pod
May 30 21:10:02.994: INFO: Waiting for pod downwardapi-volume-61e9ec75-438f-46b9-84fd-360145a00a01 to disappear
May 30 21:10:03.017: INFO: Pod downwardapi-volume-61e9ec75-438f-46b9-84fd-360145a00a01 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:10:03.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-624" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":83,"failed":0}
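The downward API volume test above exposes the container's own CPU request as a file; the mechanism is a downwardAPI volume with a resourceFieldRef, roughly as follows (names and values illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-request
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
  EOF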
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:10:03.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4073
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4073
I0530 21:10:03.274281 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4073, replica count: 2
I0530 21:10:06.324888 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0530 21:10:09.325105 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 30 21:10:09.325: INFO: Creating new exec pod
May 30 21:10:14.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4073 execpods9tpj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 30 21:10:17.033: INFO: stderr: "I0530 21:10:16.881322 27 log.go:172] (0xc0006431e0) (0xc0006abea0) Create stream\nI0530 21:10:16.881405 27 log.go:172] (0xc0006431e0) (0xc0006abea0) Stream added, broadcasting: 1\nI0530 21:10:16.884741 27 log.go:172] (0xc0006431e0) Reply frame received for 1\nI0530 21:10:16.884932 27 log.go:172] (0xc0006431e0) (0xc0005966e0) Create stream\nI0530 21:10:16.884944 27 log.go:172] (0xc0006431e0) (0xc0005966e0) Stream added, broadcasting: 3\nI0530 21:10:16.886003 27 log.go:172] (0xc0006431e0) Reply frame received for 3\nI0530 21:10:16.886043 27 log.go:172] (0xc0006431e0) (0xc0002bb4a0) Create stream\nI0530 21:10:16.886055 27 log.go:172] (0xc0006431e0) (0xc0002bb4a0) Stream added, broadcasting: 5\nI0530 21:10:16.887102 27 log.go:172] (0xc0006431e0) Reply frame received for 5\nI0530 21:10:17.016366 27 log.go:172] (0xc0006431e0) Data frame received for 5\nI0530 21:10:17.016392 27 log.go:172] (0xc0002bb4a0) (5) Data frame handling\nI0530 21:10:17.016409 27 log.go:172] (0xc0002bb4a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0530 21:10:17.024118 27 log.go:172] (0xc0006431e0) Data frame received for 5\nI0530 21:10:17.024149 27 log.go:172] (0xc0002bb4a0) (5) Data frame handling\nI0530 21:10:17.024162 27 log.go:172] (0xc0002bb4a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0530 21:10:17.024430 27 log.go:172] (0xc0006431e0) Data frame received for 5\nI0530 21:10:17.024446 27 log.go:172] (0xc0002bb4a0) (5) Data frame handling\nI0530 21:10:17.024485 27 log.go:172] (0xc0006431e0) Data frame received for 3\nI0530 21:10:17.024518 27 log.go:172] (0xc0005966e0) (3) Data frame handling\nI0530 21:10:17.026299 27 log.go:172] (0xc0006431e0) Data frame received for 1\nI0530 21:10:17.026320 27 log.go:172] (0xc0006abea0) (1) Data frame handling\nI0530 21:10:17.026336 27 log.go:172] (0xc0006abea0) (1) Data frame sent\nI0530 21:10:17.026358 27 log.go:172] (0xc0006431e0) (0xc0006abea0) Stream removed, broadcasting: 1\nI0530 21:10:17.026386 27 log.go:172] (0xc0006431e0) Go away received\nI0530 21:10:17.026708 27 log.go:172] (0xc0006431e0) (0xc0006abea0) Stream removed, broadcasting: 1\nI0530 21:10:17.026722 27 log.go:172] (0xc0006431e0) (0xc0005966e0) Stream removed, broadcasting: 3\nI0530 21:10:17.026730 27 log.go:172] (0xc0006431e0) (0xc0002bb4a0) Stream removed, broadcasting: 5\n"
May 30 21:10:17.033: INFO: stdout: ""
May 30 21:10:17.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4073 execpods9tpj -- /bin/sh -x -c nc -zv -t -w 2 10.103.36.219 80'
May 30 21:10:17.261: INFO: stderr: "I0530 21:10:17.176124 58 log.go:172] (0xc000594790) (0xc0006be820) Create stream\nI0530 21:10:17.176178 58 log.go:172] (0xc000594790) (0xc0006be820) Stream added, broadcasting: 1\nI0530 21:10:17.179248 58 log.go:172] (0xc000594790) Reply frame received for 1\nI0530 21:10:17.179321 58 log.go:172] (0xc000594790) (0xc00074f5e0) Create stream\nI0530 21:10:17.179348 58 log.go:172] (0xc000594790) (0xc00074f5e0) Stream added, broadcasting: 3\nI0530 21:10:17.180169 58 log.go:172] (0xc000594790) Reply frame received for 3\nI0530 21:10:17.180212 58 log.go:172] (0xc000594790) (0xc000a24000) Create stream\nI0530 21:10:17.180241 58 log.go:172] (0xc000594790) (0xc000a24000) Stream added, broadcasting: 5\nI0530 21:10:17.181284 58 log.go:172] (0xc000594790) Reply frame received for 5\nI0530 21:10:17.254361 58 log.go:172] (0xc000594790) Data frame received for 5\nI0530 21:10:17.254395 58 log.go:172] (0xc000a24000) (5) Data frame handling\nI0530 21:10:17.254416 58 log.go:172] (0xc000a24000) (5) Data frame sent\n+ nc -zv -t -w 2 10.103.36.219 80\nConnection to 10.103.36.219 80 port [tcp/http] succeeded!\nI0530 21:10:17.254499 58 log.go:172] (0xc000594790) Data frame received for 3\nI0530 21:10:17.254528 58 log.go:172] (0xc000594790) Data frame received for 5\nI0530 21:10:17.254558 58 log.go:172] (0xc000a24000) (5) Data frame handling\nI0530 21:10:17.254580 58 log.go:172] (0xc00074f5e0) (3) Data frame handling\nI0530 21:10:17.255454 58 log.go:172] (0xc000594790) Data frame received for 1\nI0530 21:10:17.255472 58 log.go:172] (0xc0006be820) (1) Data frame handling\nI0530 21:10:17.255489 58 log.go:172] (0xc0006be820) (1) Data frame sent\nI0530 21:10:17.255502 58 log.go:172] (0xc000594790) (0xc0006be820) Stream removed, broadcasting: 1\nI0530 21:10:17.255569 58 log.go:172] (0xc000594790) Go away received\nI0530 21:10:17.255833 58 log.go:172] (0xc000594790) (0xc0006be820) Stream removed, broadcasting: 1\nI0530 21:10:17.255848 58 log.go:172] (0xc000594790) (0xc00074f5e0) Stream removed, broadcasting: 3\nI0530 21:10:17.255858 58 log.go:172] (0xc000594790) (0xc000a24000) Stream removed, broadcasting: 5\n"
May 30 21:10:17.261: INFO: stdout: ""
May 30 21:10:17.261: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:10:17.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4073" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:14.316 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":4,"skipped":112,"failed":0}
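The two netcat checks buried in the stderr blobs above verify that the converted service answers both by name and by its new ClusterIP; they can be rerun by hand from the same exec pod (names and IP are the ones from this run):

  kubectl exec --namespace=services-4073 execpods9tpj -- \
    /bin/sh -c 'nc -zv -t -w 2 externalname-service 80 && nc -zv -t -w 2 10.103.36.219 80'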
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:10:17.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-f5a3e88e-c2a4-45f0-82d1-86cb079fe4c2
STEP: Creating a pod to test consume configMaps
May 30 21:10:17.468: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f8b70141-c98a-44bf-a86c-57686b74317e" in namespace "projected-530" to be "success or failure"
May 30 21:10:17.479: INFO: Pod "pod-projected-configmaps-f8b70141-c98a-44bf-a86c-57686b74317e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.22754ms
May 30 21:10:19.483: INFO: Pod "pod-projected-configmaps-f8b70141-c98a-44bf-a86c-57686b74317e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015530697s
May 30 21:10:21.628: INFO: Pod "pod-projected-configmaps-f8b70141-c98a-44bf-a86c-57686b74317e": Phase="Running", Reason="", readiness=true. Elapsed: 4.1608149s
May 30 21:10:23.633: INFO: Pod "pod-projected-configmaps-f8b70141-c98a-44bf-a86c-57686b74317e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.165437463s
STEP: Saw pod success
May 30 21:10:23.633: INFO: Pod "pod-projected-configmaps-f8b70141-c98a-44bf-a86c-57686b74317e" satisfied condition "success or failure"
May 30 21:10:23.636: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-f8b70141-c98a-44bf-a86c-57686b74317e container projected-configmap-volume-test:
STEP: delete the pod
May 30 21:10:23.660: INFO: Waiting for pod pod-projected-configmaps-f8b70141-c98a-44bf-a86c-57686b74317e to disappear
May 30 21:10:23.664: INFO: Pod pod-projected-configmaps-f8b70141-c98a-44bf-a86c-57686b74317e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:10:23.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-530" for this suite.
• [SLOW TEST:6.330 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":127,"failed":0}
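The non-root variant above is ordinary projected-configMap consumption with the pod pushed off UID 0; a minimal sketch, with illustrative names, UID, and key, assuming the configMap already exists:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-nonroot
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000
    containers:
    - name: projected-configmap-volume-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume
  EOF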
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:10:23.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-a7c8b477-115c-4f62-961a-64cd1299eaed
STEP: Creating a pod to test consume configMaps
May 30 21:10:23.804: INFO: Waiting up to 5m0s for pod "pod-configmaps-810a01a9-fa43-4a6a-97ae-66b5537522d0" in namespace "configmap-2494" to be "success or failure"
May 30 21:10:23.817: INFO: Pod "pod-configmaps-810a01a9-fa43-4a6a-97ae-66b5537522d0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.164139ms
May 30 21:10:25.874: INFO: Pod "pod-configmaps-810a01a9-fa43-4a6a-97ae-66b5537522d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069969316s
May 30 21:10:27.879: INFO: Pod "pod-configmaps-810a01a9-fa43-4a6a-97ae-66b5537522d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075198632s
STEP: Saw pod success
May 30 21:10:27.879: INFO: Pod "pod-configmaps-810a01a9-fa43-4a6a-97ae-66b5537522d0" satisfied condition "success or failure"
May 30 21:10:27.883: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-810a01a9-fa43-4a6a-97ae-66b5537522d0 container configmap-volume-test:
STEP: delete the pod
May 30 21:10:27.951: INFO: Waiting for pod pod-configmaps-810a01a9-fa43-4a6a-97ae-66b5537522d0 to disappear
May 30 21:10:27.973: INFO: Pod pod-configmaps-810a01a9-fa43-4a6a-97ae-66b5537522d0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:10:27.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2494" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":134,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:10:28.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 30 21:10:28.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7264'
May 30 21:10:28.452: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 30 21:10:28.452: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495
May 30 21:10:28.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7264'
May 30 21:10:28.589: INFO: stderr: ""
May 30 21:10:28.589: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:10:28.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7264" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":7,"skipped":149,"failed":0}
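The stderr line above flags that generator-based kubectl run is deprecated; on current clients the equivalent create/delete cycle is explicit:

  kubectl create deployment e2e-test-httpd-deployment \
    --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7264
  kubectl delete deployment e2e-test-httpd-deployment --namespace=kubectl-7264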
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":7,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:10:28.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:10:28.679: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 30 21:10:28.740: INFO: Number of nodes with available pods: 0 May 30 21:10:28.740: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 30 21:10:28.818: INFO: Number of nodes with available pods: 0 May 30 21:10:28.818: INFO: Node jerma-worker2 is running more than one daemon pod May 30 21:10:29.822: INFO: Number of nodes with available pods: 0 May 30 21:10:29.822: INFO: Node jerma-worker2 is running more than one daemon pod May 30 21:10:30.822: INFO: Number of nodes with available pods: 0 May 30 21:10:30.822: INFO: Node jerma-worker2 is running more than one daemon pod May 30 21:10:31.874: INFO: Number of nodes with available pods: 1 May 30 21:10:31.874: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 30 21:10:31.903: INFO: Number of nodes with available pods: 1 May 30 21:10:31.903: INFO: Number of running nodes: 0, number of available pods: 1 May 30 21:10:32.907: INFO: Number of nodes with available pods: 0 May 30 21:10:32.907: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 30 21:10:32.916: INFO: Number of nodes with available pods: 0 May 30 21:10:32.916: INFO: Node jerma-worker2 is running more than one daemon pod May 30 21:10:33.947: INFO: Number of nodes with available pods: 0 May 30 21:10:33.947: INFO: Node jerma-worker2 is running more than one daemon pod May 30 21:10:34.934: INFO: Number of nodes with available pods: 0 May 30 21:10:34.934: INFO: Node jerma-worker2 is running more than one daemon pod May 30 21:10:35.921: INFO: Number of nodes with available pods: 0 May 30 21:10:35.921: INFO: Node jerma-worker2 is running more than one daemon pod May 30 21:10:36.921: INFO: Number of nodes with available pods: 0 May 30 21:10:36.921: INFO: Node jerma-worker2 is running more than one daemon pod May 30 21:10:37.921: INFO: Number of nodes with available pods: 0 May 30 21:10:37.921: INFO: Node jerma-worker2 is running more than one daemon pod May 30 21:10:38.920: INFO: Number of nodes with available pods: 0 May 30 21:10:38.920: INFO: Node jerma-worker2 is running more than one daemon pod May 30 
May 30 21:10:39.921: INFO: Number of nodes with available pods: 0
May 30 21:10:39.921: INFO: Node jerma-worker2 is running more than one daemon pod
May 30 21:10:40.988: INFO: Number of nodes with available pods: 0
May 30 21:10:40.988: INFO: Node jerma-worker2 is running more than one daemon pod
May 30 21:10:41.924: INFO: Number of nodes with available pods: 0
May 30 21:10:41.924: INFO: Node jerma-worker2 is running more than one daemon pod
May 30 21:10:42.921: INFO: Number of nodes with available pods: 1
May 30 21:10:42.921: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7788, will wait for the garbage collector to delete the pods
May 30 21:10:42.985: INFO: Deleting DaemonSet.extensions daemon-set took: 6.60302ms
May 30 21:10:43.285: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.243785ms
May 30 21:10:49.589: INFO: Number of nodes with available pods: 0
May 30 21:10:49.589: INFO: Number of running nodes: 0, number of available pods: 0
May 30 21:10:49.609: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7788/daemonsets","resourceVersion":"20423786"},"items":null}
May 30 21:10:49.612: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7788/pods","resourceVersion":"20423786"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:10:49.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7788" for this suite.
• [SLOW TEST:21.063 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":8,"skipped":181,"failed":0}
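The label flips driving the scheduling above can be reproduced manually: a DaemonSet whose pod template carries a nodeSelector only places daemon pods on matching nodes, so relabeling a node adds or evicts its daemon pod. The label key and values below are illustrative, not the ones the suite used:

  kubectl label node jerma-worker2 daemon-color=blue                 # daemon pod appears on the node
  kubectl label node jerma-worker2 daemon-color=green --overwrite    # pod is unscheduled again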
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:10:49.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:11:06.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9097" for this suite.
• [SLOW TEST:17.186 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":9,"skipped":208,"failed":0}
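The quota lifecycle above can be walked through by hand; names and limits are illustrative, and because quota usage is tallied asynchronously the status lags briefly, which is exactly why the test polls:

  kubectl create quota test-quota --hard=secrets=2 --namespace=resourcequota-9097
  kubectl create secret generic quota-secret --from-literal=a=b --namespace=resourcequota-9097
  kubectl get quota test-quota --namespace=resourcequota-9097 -o yaml   # status.used.secrets rises
  kubectl delete secret quota-secret --namespace=resourcequota-9097     # ...and is released again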
S
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:11:06.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 30 21:11:11.093: INFO: Waiting up to 5m0s for pod "client-envvars-8a25de44-2884-4ea4-b1b1-d41a560aaf97" in namespace "pods-181" to be "success or failure"
May 30 21:11:11.116: INFO: Pod "client-envvars-8a25de44-2884-4ea4-b1b1-d41a560aaf97": Phase="Pending", Reason="", readiness=false. Elapsed: 22.383119ms
May 30 21:11:13.120: INFO: Pod "client-envvars-8a25de44-2884-4ea4-b1b1-d41a560aaf97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026187975s
May 30 21:11:15.123: INFO: Pod "client-envvars-8a25de44-2884-4ea4-b1b1-d41a560aaf97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029690771s
STEP: Saw pod success
May 30 21:11:15.123: INFO: Pod "client-envvars-8a25de44-2884-4ea4-b1b1-d41a560aaf97" satisfied condition "success or failure"
May 30 21:11:15.126: INFO: Trying to get logs from node jerma-worker pod client-envvars-8a25de44-2884-4ea4-b1b1-d41a560aaf97 container env3cont:
STEP: delete the pod
May 30 21:11:15.156: INFO: Waiting for pod client-envvars-8a25de44-2884-4ea4-b1b1-d41a560aaf97 to disappear
May 30 21:11:15.180: INFO: Pod client-envvars-8a25de44-2884-4ea4-b1b1-d41a560aaf97 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:11:15.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-181" for this suite.
• [SLOW TEST:8.341 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":209,"failed":0}
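This test depends on the kubelet injecting the standard service environment variables (SVCNAME_SERVICE_HOST, SVCNAME_SERVICE_PORT, and friends) for services that already exist when a pod starts, which is why the client pod is created only after the service. A quick manual check against any long-running pod in the namespace (the pod name is a placeholder):

  kubectl exec --namespace=pods-181 some-running-pod -- env | grep SERVICE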
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:11:15.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-536bbaff-072f-4722-b274-31d82ef75f59
STEP: Creating a pod to test consume configMaps
May 30 21:11:15.247: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-706731c4-411a-44ca-acbe-1e0dcaf231ec" in namespace "projected-3261" to be "success or failure"
May 30 21:11:15.279: INFO: Pod "pod-projected-configmaps-706731c4-411a-44ca-acbe-1e0dcaf231ec": Phase="Pending", Reason="", readiness=false. Elapsed: 32.038502ms
May 30 21:11:17.282: INFO: Pod "pod-projected-configmaps-706731c4-411a-44ca-acbe-1e0dcaf231ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035729614s
May 30 21:11:19.286: INFO: Pod "pod-projected-configmaps-706731c4-411a-44ca-acbe-1e0dcaf231ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038816035s
STEP: Saw pod success
May 30 21:11:19.286: INFO: Pod "pod-projected-configmaps-706731c4-411a-44ca-acbe-1e0dcaf231ec" satisfied condition "success or failure"
May 30 21:11:19.288: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-706731c4-411a-44ca-acbe-1e0dcaf231ec container projected-configmap-volume-test:
STEP: delete the pod
May 30 21:11:19.324: INFO: Waiting for pod pod-projected-configmaps-706731c4-411a-44ca-acbe-1e0dcaf231ec to disappear
May 30 21:11:19.340: INFO: Pod pod-projected-configmaps-706731c4-411a-44ca-acbe-1e0dcaf231ec no longer exists
[AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:11:19.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3261" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":209,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
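The pod in the spec above mounts a ConfigMap through a projected volume and checks that defaultMode is applied to the projected file. A minimal sketch of that volume shape; all names are illustrative:

kubectl create configmap projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400          # mode applied to every projected file
      sources:
      - configMap:
          name: projected-demo
EOF
kubectl logs pod-projected-demo   # expect the file listed as -r-------- (0400)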
[Conformance]","total":278,"completed":12,"skipped":231,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:11:26.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:11:26.554: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-215dca3c-89c6-4341-a130-c12354866f20" in namespace "security-context-test-5399" to be "success or failure" May 30 21:11:26.568: INFO: Pod "alpine-nnp-false-215dca3c-89c6-4341-a130-c12354866f20": Phase="Pending", Reason="", readiness=false. Elapsed: 14.223337ms May 30 21:11:28.573: INFO: Pod "alpine-nnp-false-215dca3c-89c6-4341-a130-c12354866f20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018957346s May 30 21:11:30.578: INFO: Pod "alpine-nnp-false-215dca3c-89c6-4341-a130-c12354866f20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023868941s May 30 21:11:30.578: INFO: Pod "alpine-nnp-false-215dca3c-89c6-4341-a130-c12354866f20" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:11:30.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5399" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":245,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:11:30.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 30 21:11:30.636: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:11:38.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8997" for this suite. • [SLOW TEST:8.091 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":14,"skipped":253,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:11:38.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:11:38.756: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 30 21:11:40.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8849 create -f -' May 30 21:11:46.617: INFO: stderr: "" May 30 21:11:46.618: INFO: stdout: "e2e-test-crd-publish-openapi-3917-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 30 21:11:46.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-8849 delete e2e-test-crd-publish-openapi-3917-crds test-cr' May 30 21:11:46.755: INFO: stderr: "" May 30 21:11:46.755: INFO: stdout: "e2e-test-crd-publish-openapi-3917-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 30 21:11:46.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8849 apply -f -' May 30 21:11:49.295: INFO: stderr: "" May 30 21:11:49.295: INFO: stdout: "e2e-test-crd-publish-openapi-3917-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 30 21:11:49.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8849 delete e2e-test-crd-publish-openapi-3917-crds test-cr' May 30 21:11:49.409: INFO: stderr: "" May 30 21:11:49.409: INFO: stdout: "e2e-test-crd-publish-openapi-3917-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 30 21:11:49.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3917-crds' May 30 21:11:51.075: INFO: stderr: "" May 30 21:11:51.075: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3917-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:11:52.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8849" for this suite. • [SLOW TEST:14.285 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":15,"skipped":264,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:11:52.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:11:53.068: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 30 21:11:56.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-284 create -f -' May 30 21:12:00.542: INFO: stderr: "" May 30 21:12:00.542: INFO: stdout: 
"e2e-test-crd-publish-openapi-3484-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 30 21:12:00.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-284 delete e2e-test-crd-publish-openapi-3484-crds test-cr' May 30 21:12:00.660: INFO: stderr: "" May 30 21:12:00.660: INFO: stdout: "e2e-test-crd-publish-openapi-3484-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 30 21:12:00.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-284 apply -f -' May 30 21:12:03.870: INFO: stderr: "" May 30 21:12:03.870: INFO: stdout: "e2e-test-crd-publish-openapi-3484-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 30 21:12:03.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-284 delete e2e-test-crd-publish-openapi-3484-crds test-cr' May 30 21:12:03.989: INFO: stderr: "" May 30 21:12:03.989: INFO: stdout: "e2e-test-crd-publish-openapi-3484-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 30 21:12:03.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3484-crds' May 30 21:12:06.446: INFO: stderr: "" May 30 21:12:06.446: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3484-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:12:08.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-284" for this suite. 
• [SLOW TEST:15.370 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":16,"skipped":264,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:12:08.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 30 21:12:08.391: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:12:19.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8823" for this suite. 
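The spec above drives a pod through creation and graceful deletion while confirming that every transition arrives on a watch. The same observation can be made by hand; the names here are illustrative:

# In one shell: watch pods by label selector; the row is re-printed on each transition.
kubectl get pods -l demo=watch --watch
# In another shell: submit the pod, then remove it with the default 30s grace period.
kubectl run watched --image=nginx --restart=Never --labels=demo=watch
kubectl delete pod watched --grace-period=30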
• [SLOW TEST:10.997 seconds]
[k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:12:19.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
May 30 21:12:19.430: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 30 21:12:19.456: INFO: Waiting for terminating namespaces to be deleted...
May 30 21:12:19.459: INFO: Logging pods the kubelet thinks are on node jerma-worker before test
May 30 21:12:19.464: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded)
May 30 21:12:19.464: INFO: Container kindnet-cni ready: true, restart count 2
May 30 21:12:19.464: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded)
May 30 21:12:19.464: INFO: Container kube-proxy ready: true, restart count 0
May 30 21:12:19.464: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test
May 30 21:12:19.469: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded)
May 30 21:12:19.469: INFO: Container kindnet-cni ready: true, restart count 2
May 30 21:12:19.469: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded)
May 30 21:12:19.469: INFO: Container kube-bench ready: false, restart count 0
May 30 21:12:19.469: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded)
May 30 21:12:19.469: INFO: Container kube-proxy ready: true, restart count 0
May 30 21:12:19.469: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded)
May 30 21:12:19.469: INFO: Container kube-hunter ready: false, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
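The steps that follow pin two pods to the labeled node and ask the scheduler to place both on hostPort 54322: pod4 with the hostIP left empty (0.0.0.0, all addresses) and pod5 on 127.0.0.1. Because a 0.0.0.0 binding overlaps every hostIP for the same port and protocol, pod5 must stay Pending. A sketch of that pair; the node is pinned here by hostname rather than by the test's random label, and the image is a stand-in for the test's own:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker2
  containers:
  - name: server
    image: nginx
    ports:
    - containerPort: 8080
      hostPort: 54322          # hostIP omitted: binds 0.0.0.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker2
  containers:
  - name: server
    image: nginx
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1        # conflicts with pod4's 0.0.0.0 binding
EOF
kubectl get pod pod5   # expect Pending, with a FailedScheduling event on describe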
STEP: verifying the node has the label kubernetes.io/e2e-df8c8912-1d6e-4776-babf-6b13990ca2f9 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-df8c8912-1d6e-4776-babf-6b13990ca2f9 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-df8c8912-1d6e-4776-babf-6b13990ca2f9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:17:27.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4437" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.369 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":18,"skipped":303,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:17:27.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:17:28.437: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:17:30.447: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470248, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470248, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470248, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470248, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:17:33.476: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:17:33.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3611-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:17:34.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9284" for this suite. STEP: Destroying namespace "webhook-9284-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.930 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":19,"skipped":310,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:17:34.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-18 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-18 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-18 May 30 21:17:34.844: INFO: Found 0 stateful 
pods, waiting for 1 May 30 21:17:44.847: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 30 21:17:44.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-18 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 21:17:45.096: INFO: stderr: "I0530 21:17:44.981087 360 log.go:172] (0xc000a146e0) (0xc000962000) Create stream\nI0530 21:17:44.981246 360 log.go:172] (0xc000a146e0) (0xc000962000) Stream added, broadcasting: 1\nI0530 21:17:44.984588 360 log.go:172] (0xc000a146e0) Reply frame received for 1\nI0530 21:17:44.984643 360 log.go:172] (0xc000a146e0) (0xc000627ae0) Create stream\nI0530 21:17:44.984653 360 log.go:172] (0xc000a146e0) (0xc000627ae0) Stream added, broadcasting: 3\nI0530 21:17:44.985963 360 log.go:172] (0xc000a146e0) Reply frame received for 3\nI0530 21:17:44.986010 360 log.go:172] (0xc000a146e0) (0xc0009620a0) Create stream\nI0530 21:17:44.986037 360 log.go:172] (0xc000a146e0) (0xc0009620a0) Stream added, broadcasting: 5\nI0530 21:17:44.986929 360 log.go:172] (0xc000a146e0) Reply frame received for 5\nI0530 21:17:45.062077 360 log.go:172] (0xc000a146e0) Data frame received for 5\nI0530 21:17:45.062109 360 log.go:172] (0xc0009620a0) (5) Data frame handling\nI0530 21:17:45.062131 360 log.go:172] (0xc0009620a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 21:17:45.087827 360 log.go:172] (0xc000a146e0) Data frame received for 3\nI0530 21:17:45.087858 360 log.go:172] (0xc000627ae0) (3) Data frame handling\nI0530 21:17:45.087876 360 log.go:172] (0xc000627ae0) (3) Data frame sent\nI0530 21:17:45.087884 360 log.go:172] (0xc000a146e0) Data frame received for 3\nI0530 21:17:45.087889 360 log.go:172] (0xc000627ae0) (3) Data frame handling\nI0530 21:17:45.088262 360 log.go:172] (0xc000a146e0) Data frame received for 5\nI0530 21:17:45.088283 360 log.go:172] (0xc0009620a0) (5) Data frame handling\nI0530 21:17:45.090032 360 log.go:172] (0xc000a146e0) Data frame received for 1\nI0530 21:17:45.090048 360 log.go:172] (0xc000962000) (1) Data frame handling\nI0530 21:17:45.090055 360 log.go:172] (0xc000962000) (1) Data frame sent\nI0530 21:17:45.090064 360 log.go:172] (0xc000a146e0) (0xc000962000) Stream removed, broadcasting: 1\nI0530 21:17:45.090098 360 log.go:172] (0xc000a146e0) Go away received\nI0530 21:17:45.090328 360 log.go:172] (0xc000a146e0) (0xc000962000) Stream removed, broadcasting: 1\nI0530 21:17:45.090344 360 log.go:172] (0xc000a146e0) (0xc000627ae0) Stream removed, broadcasting: 3\nI0530 21:17:45.090352 360 log.go:172] (0xc000a146e0) (0xc0009620a0) Stream removed, broadcasting: 5\n" May 30 21:17:45.096: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 21:17:45.096: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 21:17:45.100: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 30 21:17:55.114: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 30 21:17:55.114: INFO: Waiting for statefulset status.replicas updated to 0 May 30 21:17:55.128: INFO: POD NODE PHASE GRACE CONDITIONS May 30 21:17:55.128: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:17:34 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:17:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:17:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:17:34 +0000 UTC }] May 30 21:17:55.128: INFO: May 30 21:17:55.128: INFO: StatefulSet ss has not reached scale 3, at 1 May 30 21:17:56.132: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994763194s May 30 21:17:57.343: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990462432s May 30 21:17:58.374: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.779220517s May 30 21:17:59.381: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.747927055s May 30 21:18:00.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.741668977s May 30 21:18:01.397: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.730305183s May 30 21:18:02.404: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.725190618s May 30 21:18:03.410: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.7187312s May 30 21:18:04.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 712.774944ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-18 May 30 21:18:05.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-18 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 21:18:05.668: INFO: stderr: "I0530 21:18:05.587087 381 log.go:172] (0xc0009f7810) (0xc0009e2820) Create stream\nI0530 21:18:05.587153 381 log.go:172] (0xc0009f7810) (0xc0009e2820) Stream added, broadcasting: 1\nI0530 21:18:05.593531 381 log.go:172] (0xc0009f7810) Reply frame received for 1\nI0530 21:18:05.593580 381 log.go:172] (0xc0009f7810) (0xc000707c20) Create stream\nI0530 21:18:05.593591 381 log.go:172] (0xc0009f7810) (0xc000707c20) Stream added, broadcasting: 3\nI0530 21:18:05.594535 381 log.go:172] (0xc0009f7810) Reply frame received for 3\nI0530 21:18:05.594567 381 log.go:172] (0xc0009f7810) (0xc000662820) Create stream\nI0530 21:18:05.594576 381 log.go:172] (0xc0009f7810) (0xc000662820) Stream added, broadcasting: 5\nI0530 21:18:05.595321 381 log.go:172] (0xc0009f7810) Reply frame received for 5\nI0530 21:18:05.661449 381 log.go:172] (0xc0009f7810) Data frame received for 3\nI0530 21:18:05.661505 381 log.go:172] (0xc000707c20) (3) Data frame handling\nI0530 21:18:05.661527 381 log.go:172] (0xc000707c20) (3) Data frame sent\nI0530 21:18:05.661542 381 log.go:172] (0xc0009f7810) Data frame received for 3\nI0530 21:18:05.661551 381 log.go:172] (0xc000707c20) (3) Data frame handling\nI0530 21:18:05.661564 381 log.go:172] (0xc0009f7810) Data frame received for 5\nI0530 21:18:05.661571 381 log.go:172] (0xc000662820) (5) Data frame handling\nI0530 21:18:05.661580 381 log.go:172] (0xc000662820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 21:18:05.661724 381 log.go:172] (0xc0009f7810) Data frame received for 5\nI0530 21:18:05.661752 381 log.go:172] (0xc000662820) (5) Data frame handling\nI0530 21:18:05.663076 381 log.go:172] (0xc0009f7810) Data frame received for 1\nI0530 21:18:05.663094 381 log.go:172] (0xc0009e2820) (1) Data frame handling\nI0530 21:18:05.663103 381 log.go:172] (0xc0009e2820) (1) Data frame 
sent\nI0530 21:18:05.663114 381 log.go:172] (0xc0009f7810) (0xc0009e2820) Stream removed, broadcasting: 1\nI0530 21:18:05.663131 381 log.go:172] (0xc0009f7810) Go away received\nI0530 21:18:05.663515 381 log.go:172] (0xc0009f7810) (0xc0009e2820) Stream removed, broadcasting: 1\nI0530 21:18:05.663534 381 log.go:172] (0xc0009f7810) (0xc000707c20) Stream removed, broadcasting: 3\nI0530 21:18:05.663543 381 log.go:172] (0xc0009f7810) (0xc000662820) Stream removed, broadcasting: 5\n" May 30 21:18:05.668: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 21:18:05.668: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 21:18:05.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-18 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 21:18:05.876: INFO: stderr: "I0530 21:18:05.799230 401 log.go:172] (0xc000ac6000) (0xc000944000) Create stream\nI0530 21:18:05.799296 401 log.go:172] (0xc000ac6000) (0xc000944000) Stream added, broadcasting: 1\nI0530 21:18:05.802527 401 log.go:172] (0xc000ac6000) Reply frame received for 1\nI0530 21:18:05.802677 401 log.go:172] (0xc000ac6000) (0xc00081c000) Create stream\nI0530 21:18:05.802703 401 log.go:172] (0xc000ac6000) (0xc00081c000) Stream added, broadcasting: 3\nI0530 21:18:05.806225 401 log.go:172] (0xc000ac6000) Reply frame received for 3\nI0530 21:18:05.806257 401 log.go:172] (0xc000ac6000) (0xc0009440a0) Create stream\nI0530 21:18:05.806268 401 log.go:172] (0xc000ac6000) (0xc0009440a0) Stream added, broadcasting: 5\nI0530 21:18:05.807133 401 log.go:172] (0xc000ac6000) Reply frame received for 5\nI0530 21:18:05.858660 401 log.go:172] (0xc000ac6000) Data frame received for 5\nI0530 21:18:05.858678 401 log.go:172] (0xc0009440a0) (5) Data frame handling\nI0530 21:18:05.858688 401 log.go:172] (0xc0009440a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 21:18:05.867764 401 log.go:172] (0xc000ac6000) Data frame received for 5\nI0530 21:18:05.867792 401 log.go:172] (0xc0009440a0) (5) Data frame handling\nI0530 21:18:05.867806 401 log.go:172] (0xc0009440a0) (5) Data frame sent\nI0530 21:18:05.867818 401 log.go:172] (0xc000ac6000) Data frame received for 5\nI0530 21:18:05.867830 401 log.go:172] (0xc0009440a0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0530 21:18:05.867838 401 log.go:172] (0xc000ac6000) Data frame received for 3\nI0530 21:18:05.867915 401 log.go:172] (0xc00081c000) (3) Data frame handling\nI0530 21:18:05.867945 401 log.go:172] (0xc00081c000) (3) Data frame sent\nI0530 21:18:05.867993 401 log.go:172] (0xc0009440a0) (5) Data frame sent\nI0530 21:18:05.868019 401 log.go:172] (0xc000ac6000) Data frame received for 5\nI0530 21:18:05.868036 401 log.go:172] (0xc0009440a0) (5) Data frame handling\nI0530 21:18:05.868061 401 log.go:172] (0xc000ac6000) Data frame received for 3\nI0530 21:18:05.868075 401 log.go:172] (0xc00081c000) (3) Data frame handling\nI0530 21:18:05.869914 401 log.go:172] (0xc000ac6000) Data frame received for 1\nI0530 21:18:05.869934 401 log.go:172] (0xc000944000) (1) Data frame handling\nI0530 21:18:05.869950 401 log.go:172] (0xc000944000) (1) Data frame sent\nI0530 21:18:05.869976 401 log.go:172] (0xc000ac6000) (0xc000944000) Stream removed, broadcasting: 1\nI0530 21:18:05.869993 401 log.go:172] (0xc000ac6000) Go away received\nI0530 
21:18:05.870446 401 log.go:172] (0xc000ac6000) (0xc000944000) Stream removed, broadcasting: 1\nI0530 21:18:05.870471 401 log.go:172] (0xc000ac6000) (0xc00081c000) Stream removed, broadcasting: 3\nI0530 21:18:05.870483 401 log.go:172] (0xc000ac6000) (0xc0009440a0) Stream removed, broadcasting: 5\n" May 30 21:18:05.876: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 21:18:05.876: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 21:18:05.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-18 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 21:18:06.088: INFO: stderr: "I0530 21:18:06.016896 424 log.go:172] (0xc00073ea50) (0xc000b6a000) Create stream\nI0530 21:18:06.016972 424 log.go:172] (0xc00073ea50) (0xc000b6a000) Stream added, broadcasting: 1\nI0530 21:18:06.019483 424 log.go:172] (0xc00073ea50) Reply frame received for 1\nI0530 21:18:06.019542 424 log.go:172] (0xc00073ea50) (0xc0006199a0) Create stream\nI0530 21:18:06.019559 424 log.go:172] (0xc00073ea50) (0xc0006199a0) Stream added, broadcasting: 3\nI0530 21:18:06.020255 424 log.go:172] (0xc00073ea50) Reply frame received for 3\nI0530 21:18:06.020289 424 log.go:172] (0xc00073ea50) (0xc000b6a0a0) Create stream\nI0530 21:18:06.020303 424 log.go:172] (0xc00073ea50) (0xc000b6a0a0) Stream added, broadcasting: 5\nI0530 21:18:06.020980 424 log.go:172] (0xc00073ea50) Reply frame received for 5\nI0530 21:18:06.077867 424 log.go:172] (0xc00073ea50) Data frame received for 5\nI0530 21:18:06.077898 424 log.go:172] (0xc000b6a0a0) (5) Data frame handling\nI0530 21:18:06.077916 424 log.go:172] (0xc000b6a0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0530 21:18:06.077949 424 log.go:172] (0xc00073ea50) Data frame received for 3\nI0530 21:18:06.077961 424 log.go:172] (0xc0006199a0) (3) Data frame handling\nI0530 21:18:06.077975 424 log.go:172] (0xc0006199a0) (3) Data frame sent\nI0530 21:18:06.077990 424 log.go:172] (0xc00073ea50) Data frame received for 3\nI0530 21:18:06.078002 424 log.go:172] (0xc0006199a0) (3) Data frame handling\nI0530 21:18:06.078087 424 log.go:172] (0xc00073ea50) Data frame received for 5\nI0530 21:18:06.078103 424 log.go:172] (0xc000b6a0a0) (5) Data frame handling\nI0530 21:18:06.079818 424 log.go:172] (0xc00073ea50) Data frame received for 1\nI0530 21:18:06.079844 424 log.go:172] (0xc000b6a000) (1) Data frame handling\nI0530 21:18:06.079859 424 log.go:172] (0xc000b6a000) (1) Data frame sent\nI0530 21:18:06.079871 424 log.go:172] (0xc00073ea50) (0xc000b6a000) Stream removed, broadcasting: 1\nI0530 21:18:06.079892 424 log.go:172] (0xc00073ea50) Go away received\nI0530 21:18:06.080443 424 log.go:172] (0xc00073ea50) (0xc000b6a000) Stream removed, broadcasting: 1\nI0530 21:18:06.080477 424 log.go:172] (0xc00073ea50) (0xc0006199a0) Stream removed, broadcasting: 3\nI0530 21:18:06.080494 424 log.go:172] (0xc00073ea50) (0xc000b6a0a0) Stream removed, broadcasting: 5\n" May 30 21:18:06.088: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 21:18:06.088: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 21:18:06.108: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently 
Running - Ready=false May 30 21:18:16.133: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 30 21:18:16.133: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 30 21:18:16.133: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 30 21:18:16.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-18 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 21:18:16.370: INFO: stderr: "I0530 21:18:16.258646 448 log.go:172] (0xc000a10b00) (0xc00077a0a0) Create stream\nI0530 21:18:16.258704 448 log.go:172] (0xc000a10b00) (0xc00077a0a0) Stream added, broadcasting: 1\nI0530 21:18:16.261361 448 log.go:172] (0xc000a10b00) Reply frame received for 1\nI0530 21:18:16.261406 448 log.go:172] (0xc000a10b00) (0xc0005df2c0) Create stream\nI0530 21:18:16.261422 448 log.go:172] (0xc000a10b00) (0xc0005df2c0) Stream added, broadcasting: 3\nI0530 21:18:16.262539 448 log.go:172] (0xc000a10b00) Reply frame received for 3\nI0530 21:18:16.262584 448 log.go:172] (0xc000a10b00) (0xc0007ce000) Create stream\nI0530 21:18:16.262598 448 log.go:172] (0xc000a10b00) (0xc0007ce000) Stream added, broadcasting: 5\nI0530 21:18:16.263490 448 log.go:172] (0xc000a10b00) Reply frame received for 5\nI0530 21:18:16.361637 448 log.go:172] (0xc000a10b00) Data frame received for 5\nI0530 21:18:16.361661 448 log.go:172] (0xc0007ce000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 21:18:16.361694 448 log.go:172] (0xc000a10b00) Data frame received for 3\nI0530 21:18:16.361739 448 log.go:172] (0xc0005df2c0) (3) Data frame handling\nI0530 21:18:16.361769 448 log.go:172] (0xc0005df2c0) (3) Data frame sent\nI0530 21:18:16.361790 448 log.go:172] (0xc000a10b00) Data frame received for 3\nI0530 21:18:16.361811 448 log.go:172] (0xc0005df2c0) (3) Data frame handling\nI0530 21:18:16.361836 448 log.go:172] (0xc0007ce000) (5) Data frame sent\nI0530 21:18:16.361855 448 log.go:172] (0xc000a10b00) Data frame received for 5\nI0530 21:18:16.361863 448 log.go:172] (0xc0007ce000) (5) Data frame handling\nI0530 21:18:16.363180 448 log.go:172] (0xc000a10b00) Data frame received for 1\nI0530 21:18:16.363199 448 log.go:172] (0xc00077a0a0) (1) Data frame handling\nI0530 21:18:16.363208 448 log.go:172] (0xc00077a0a0) (1) Data frame sent\nI0530 21:18:16.363219 448 log.go:172] (0xc000a10b00) (0xc00077a0a0) Stream removed, broadcasting: 1\nI0530 21:18:16.363249 448 log.go:172] (0xc000a10b00) Go away received\nI0530 21:18:16.363751 448 log.go:172] (0xc000a10b00) (0xc00077a0a0) Stream removed, broadcasting: 1\nI0530 21:18:16.363773 448 log.go:172] (0xc000a10b00) (0xc0005df2c0) Stream removed, broadcasting: 3\nI0530 21:18:16.363786 448 log.go:172] (0xc000a10b00) (0xc0007ce000) Stream removed, broadcasting: 5\n" May 30 21:18:16.371: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 21:18:16.371: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 21:18:16.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-18 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 21:18:16.624: INFO: stderr: "I0530 21:18:16.517301 468 log.go:172] (0xc00092e6e0) (0xc00062bc20) 
Create stream\nI0530 21:18:16.517382 468 log.go:172] (0xc00092e6e0) (0xc00062bc20) Stream added, broadcasting: 1\nI0530 21:18:16.519841 468 log.go:172] (0xc00092e6e0) Reply frame received for 1\nI0530 21:18:16.519894 468 log.go:172] (0xc00092e6e0) (0xc00062bcc0) Create stream\nI0530 21:18:16.519908 468 log.go:172] (0xc00092e6e0) (0xc00062bcc0) Stream added, broadcasting: 3\nI0530 21:18:16.520732 468 log.go:172] (0xc00092e6e0) Reply frame received for 3\nI0530 21:18:16.520778 468 log.go:172] (0xc00092e6e0) (0xc000a0c000) Create stream\nI0530 21:18:16.520794 468 log.go:172] (0xc00092e6e0) (0xc000a0c000) Stream added, broadcasting: 5\nI0530 21:18:16.522196 468 log.go:172] (0xc00092e6e0) Reply frame received for 5\nI0530 21:18:16.586226 468 log.go:172] (0xc00092e6e0) Data frame received for 5\nI0530 21:18:16.586248 468 log.go:172] (0xc000a0c000) (5) Data frame handling\nI0530 21:18:16.586267 468 log.go:172] (0xc000a0c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 21:18:16.616354 468 log.go:172] (0xc00092e6e0) Data frame received for 5\nI0530 21:18:16.616385 468 log.go:172] (0xc000a0c000) (5) Data frame handling\nI0530 21:18:16.616419 468 log.go:172] (0xc00092e6e0) Data frame received for 3\nI0530 21:18:16.616435 468 log.go:172] (0xc00062bcc0) (3) Data frame handling\nI0530 21:18:16.616448 468 log.go:172] (0xc00062bcc0) (3) Data frame sent\nI0530 21:18:16.616461 468 log.go:172] (0xc00092e6e0) Data frame received for 3\nI0530 21:18:16.616473 468 log.go:172] (0xc00062bcc0) (3) Data frame handling\nI0530 21:18:16.618498 468 log.go:172] (0xc00092e6e0) Data frame received for 1\nI0530 21:18:16.618536 468 log.go:172] (0xc00062bc20) (1) Data frame handling\nI0530 21:18:16.618561 468 log.go:172] (0xc00062bc20) (1) Data frame sent\nI0530 21:18:16.618600 468 log.go:172] (0xc00092e6e0) (0xc00062bc20) Stream removed, broadcasting: 1\nI0530 21:18:16.618626 468 log.go:172] (0xc00092e6e0) Go away received\nI0530 21:18:16.618888 468 log.go:172] (0xc00092e6e0) (0xc00062bc20) Stream removed, broadcasting: 1\nI0530 21:18:16.618904 468 log.go:172] (0xc00092e6e0) (0xc00062bcc0) Stream removed, broadcasting: 3\nI0530 21:18:16.618910 468 log.go:172] (0xc00092e6e0) (0xc000a0c000) Stream removed, broadcasting: 5\n" May 30 21:18:16.624: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 21:18:16.624: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 21:18:16.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-18 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 21:18:16.882: INFO: stderr: "I0530 21:18:16.767823 489 log.go:172] (0xc000906a50) (0xc00065fea0) Create stream\nI0530 21:18:16.767888 489 log.go:172] (0xc000906a50) (0xc00065fea0) Stream added, broadcasting: 1\nI0530 21:18:16.771092 489 log.go:172] (0xc000906a50) Reply frame received for 1\nI0530 21:18:16.771160 489 log.go:172] (0xc000906a50) (0xc0005d8780) Create stream\nI0530 21:18:16.771189 489 log.go:172] (0xc000906a50) (0xc0005d8780) Stream added, broadcasting: 3\nI0530 21:18:16.772430 489 log.go:172] (0xc000906a50) Reply frame received for 3\nI0530 21:18:16.772479 489 log.go:172] (0xc000906a50) (0xc00065ff40) Create stream\nI0530 21:18:16.772505 489 log.go:172] (0xc000906a50) (0xc00065ff40) Stream added, broadcasting: 5\nI0530 21:18:16.773789 489 log.go:172] (0xc000906a50) Reply frame received for 
5\nI0530 21:18:16.843110 489 log.go:172] (0xc000906a50) Data frame received for 5\nI0530 21:18:16.843132 489 log.go:172] (0xc00065ff40) (5) Data frame handling\nI0530 21:18:16.843147 489 log.go:172] (0xc00065ff40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 21:18:16.874421 489 log.go:172] (0xc000906a50) Data frame received for 3\nI0530 21:18:16.874468 489 log.go:172] (0xc0005d8780) (3) Data frame handling\nI0530 21:18:16.874499 489 log.go:172] (0xc0005d8780) (3) Data frame sent\nI0530 21:18:16.874514 489 log.go:172] (0xc000906a50) Data frame received for 3\nI0530 21:18:16.874527 489 log.go:172] (0xc0005d8780) (3) Data frame handling\nI0530 21:18:16.874580 489 log.go:172] (0xc000906a50) Data frame received for 5\nI0530 21:18:16.874627 489 log.go:172] (0xc00065ff40) (5) Data frame handling\nI0530 21:18:16.875945 489 log.go:172] (0xc000906a50) Data frame received for 1\nI0530 21:18:16.875975 489 log.go:172] (0xc00065fea0) (1) Data frame handling\nI0530 21:18:16.876012 489 log.go:172] (0xc00065fea0) (1) Data frame sent\nI0530 21:18:16.876041 489 log.go:172] (0xc000906a50) (0xc00065fea0) Stream removed, broadcasting: 1\nI0530 21:18:16.876075 489 log.go:172] (0xc000906a50) Go away received\nI0530 21:18:16.876510 489 log.go:172] (0xc000906a50) (0xc00065fea0) Stream removed, broadcasting: 1\nI0530 21:18:16.876533 489 log.go:172] (0xc000906a50) (0xc0005d8780) Stream removed, broadcasting: 3\nI0530 21:18:16.876549 489 log.go:172] (0xc000906a50) (0xc00065ff40) Stream removed, broadcasting: 5\n" May 30 21:18:16.882: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 21:18:16.882: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 21:18:16.882: INFO: Waiting for statefulset status.replicas updated to 0 May 30 21:18:16.886: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 30 21:18:26.894: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 30 21:18:26.894: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 30 21:18:26.894: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 30 21:18:26.907: INFO: POD NODE PHASE GRACE CONDITIONS May 30 21:18:26.907: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:17:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:18:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:18:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:17:34 +0000 UTC }] May 30 21:18:26.907: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:17:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:18:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:18:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:17:55 +0000 UTC }] May 30 21:18:26.907: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:17:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-05-30 21:18:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:18:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 21:17:55 +0000 UTC }] May 30 21:18:26.907: INFO: May 30 21:18:26.907: INFO: StatefulSet ss has not reached scale 0, at 3
[near-identical POD NODE PHASE GRACE CONDITIONS dumps repeated roughly once per second from 21:18:28.013 through 21:18:36.055: each remaining pod (GRACE 30s) reported Ready=False and ContainersReady=False with reason ContainersNotReady, containers with unready status: [webserver]; ss-1 (jerma-worker2) was gone by the 21:18:30.023 poll, ss-0 and ss-2 (both jerma-worker) moved from Running to Pending, and every iteration ended with "StatefulSet ss has not reached scale 0", first at 3 and then at 2]
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-18 May 30 21:18:37.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-18 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 21:18:37.201: INFO: rc: 1 May 30 21:18:37.201: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-18 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1
[the same RunHostCmd attempt was repeated roughly every 10 seconds from 21:18:47.201 through 21:23:43.091, each run returning rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1]
May 30 21:23:43.091: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 30 21:23:43.091: INFO: Scaling statefulset ss to 0 May 30 21:23:43.116: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 30 21:23:43.118: INFO: Deleting all statefulsets in ns statefulset-18 May 30 21:23:43.120: INFO: Scaling statefulset ss to 0 May 30 21:23:43.128: INFO: Waiting for statefulset status.replicas updated to 0 May 30 21:23:43.131: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:23:43.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-18" for this suite.
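The burst-scaling spec above ends by forcing the StatefulSet to zero replicas and waiting for status.replicas to report 0. To replay that last step by hand, a rough equivalent with stock kubectl is sketched below; the ss name, statefulset-18 namespace, and kubeconfig path come from the log, but the wait loop itself is only an illustration, not the framework's actual implementation:

# Scale the StatefulSet down to zero, as the test's final step does (sketch).
kubectl --kubeconfig=/root/.kube/config scale statefulset ss --replicas=0 --namespace=statefulset-18
# Poll status.replicas until the controller reports 0, mirroring the test's wait.
until [ "$(kubectl --kubeconfig=/root/.kube/config get statefulset ss --namespace=statefulset-18 -o jsonpath='{.status.replicas}')" = "0" ]; do sleep 1; done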
• [SLOW TEST:368.518 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":20,"skipped":318,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:23:43.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:23:43.728: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:23:45.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470623, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470623, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470623, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470623, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:23:47.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470623, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470623, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63726470623, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470623, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:23:50.862: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:23:50.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3126" for this suite. STEP: Destroying namespace "webhook-3126-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.964 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":21,"skipped":321,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:23:51.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-5166/configmap-test-a0e8ba77-5c25-4c72-a6fc-9bc7477d6705 STEP: Creating a pod to test consume configMaps May 30 21:23:51.241: INFO: Waiting up to 5m0s for pod "pod-configmaps-16f35c7a-72cd-49ce-8707-9e4aa88f4fde" in namespace "configmap-5166" to be "success or failure" May 30 21:23:51.254: INFO: Pod "pod-configmaps-16f35c7a-72cd-49ce-8707-9e4aa88f4fde": Phase="Pending", Reason="", readiness=false. Elapsed: 13.198188ms May 30 21:23:53.258: INFO: Pod "pod-configmaps-16f35c7a-72cd-49ce-8707-9e4aa88f4fde": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017360568s May 30 21:23:55.262: INFO: Pod "pod-configmaps-16f35c7a-72cd-49ce-8707-9e4aa88f4fde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021752427s STEP: Saw pod success May 30 21:23:55.262: INFO: Pod "pod-configmaps-16f35c7a-72cd-49ce-8707-9e4aa88f4fde" satisfied condition "success or failure" May 30 21:23:55.266: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-16f35c7a-72cd-49ce-8707-9e4aa88f4fde container env-test: STEP: delete the pod May 30 21:23:55.300: INFO: Waiting for pod pod-configmaps-16f35c7a-72cd-49ce-8707-9e4aa88f4fde to disappear May 30 21:23:55.320: INFO: Pod pod-configmaps-16f35c7a-72cd-49ce-8707-9e4aa88f4fde no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:23:55.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5166" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":334,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:23:55.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:23:55.415: INFO: Waiting up to 5m0s for pod "downwardapi-volume-113bc853-88db-4193-9ba3-af04fd5d03ee" in namespace "downward-api-4629" to be "success or failure" May 30 21:23:55.422: INFO: Pod "downwardapi-volume-113bc853-88db-4193-9ba3-af04fd5d03ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183706ms May 30 21:23:57.503: INFO: Pod "downwardapi-volume-113bc853-88db-4193-9ba3-af04fd5d03ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087184442s May 30 21:23:59.507: INFO: Pod "downwardapi-volume-113bc853-88db-4193-9ba3-af04fd5d03ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.091741372s STEP: Saw pod success May 30 21:23:59.507: INFO: Pod "downwardapi-volume-113bc853-88db-4193-9ba3-af04fd5d03ee" satisfied condition "success or failure" May 30 21:23:59.511: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-113bc853-88db-4193-9ba3-af04fd5d03ee container client-container: STEP: delete the pod May 30 21:23:59.556: INFO: Waiting for pod downwardapi-volume-113bc853-88db-4193-9ba3-af04fd5d03ee to disappear May 30 21:23:59.592: INFO: Pod downwardapi-volume-113bc853-88db-4193-9ba3-af04fd5d03ee no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:23:59.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4629" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:23:59.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 30 21:23:59.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 30 21:23:59.848: INFO: stderr: "" May 30 21:23:59.848: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:23:59.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2533" for this suite. 
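The api-versions check above boils down to one CLI call plus an exact string match; a minimal hand-run equivalent (reusing the kubeconfig path from the log) would be:

# Succeeds only if the core "v1" group/version is advertised by the apiserver.
kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1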
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":24,"skipped":397,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:23:59.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-adcb6409-70d0-4868-a7a1-8d20a413e613 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:24:06.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7395" for this suite. • [SLOW TEST:6.184 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":401,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:24:06.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:24:17.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9508" for this suite. 
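The ResourceQuota lifecycle exercised above (quota created, ReplicationController counted against it, usage released on deletion) can be sketched with plain kubectl; the quota-demo namespace and test-quota name below are hypothetical, not the suite's generated ones:

# Hypothetical namespace and a quota capping ReplicationController count (sketch).
kubectl create namespace quota-demo
kubectl create quota test-quota --hard=replicationcontrollers=1 --namespace=quota-demo
# "Used" rises when an RC is created in the namespace and drops back after it is deleted.
kubectl describe quota test-quota --namespace=quota-demo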
• [SLOW TEST:11.176 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":26,"skipped":404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:24:17.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 30 21:24:17.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1954' May 30 21:24:17.390: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 30 21:24:17.390: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 30 21:24:17.432: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-spvwn] May 30 21:24:17.432: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-spvwn" in namespace "kubectl-1954" to be "running and ready" May 30 21:24:17.503: INFO: Pod "e2e-test-httpd-rc-spvwn": Phase="Pending", Reason="", readiness=false. Elapsed: 70.753045ms May 30 21:24:19.522: INFO: Pod "e2e-test-httpd-rc-spvwn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090213593s May 30 21:24:21.527: INFO: Pod "e2e-test-httpd-rc-spvwn": Phase="Running", Reason="", readiness=true. Elapsed: 4.094514274s May 30 21:24:21.527: INFO: Pod "e2e-test-httpd-rc-spvwn" satisfied condition "running and ready" May 30 21:24:21.527: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-spvwn] May 30 21:24:21.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-1954' May 30 21:24:21.643: INFO: stderr: "" May 30 21:24:21.643: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.185. 
Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.185. Set the 'ServerName' directive globally to suppress this message\n[Sat May 30 21:24:19.954863 2020] [mpm_event:notice] [pid 1:tid 140027632835432] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat May 30 21:24:19.954932 2020] [core:notice] [pid 1:tid 140027632835432] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 30 21:24:21.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1954' May 30 21:24:21.745: INFO: stderr: "" May 30 21:24:21.745: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:24:21.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1954" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":27,"skipped":429,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:24:21.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 30 21:24:21.826: INFO: Waiting up to 5m0s for pod "pod-1da3781d-4a48-4ee0-9c0a-4b0821187e54" in namespace "emptydir-8198" to be "success or failure" May 30 21:24:21.829: INFO: Pod "pod-1da3781d-4a48-4ee0-9c0a-4b0821187e54": Phase="Pending", Reason="", readiness=false. Elapsed: 3.235636ms May 30 21:24:23.856: INFO: Pod "pod-1da3781d-4a48-4ee0-9c0a-4b0821187e54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030445573s May 30 21:24:25.860: INFO: Pod "pod-1da3781d-4a48-4ee0-9c0a-4b0821187e54": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034531155s STEP: Saw pod success May 30 21:24:25.860: INFO: Pod "pod-1da3781d-4a48-4ee0-9c0a-4b0821187e54" satisfied condition "success or failure" May 30 21:24:25.863: INFO: Trying to get logs from node jerma-worker2 pod pod-1da3781d-4a48-4ee0-9c0a-4b0821187e54 container test-container: STEP: delete the pod May 30 21:24:25.884: INFO: Waiting for pod pod-1da3781d-4a48-4ee0-9c0a-4b0821187e54 to disappear May 30 21:24:25.889: INFO: Pod pod-1da3781d-4a48-4ee0-9c0a-4b0821187e54 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:24:25.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8198" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":452,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:24:25.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod May 30 21:24:25.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6054 -- logs-generator --log-lines-total 100 --run-duration 20s' May 30 21:24:26.071: INFO: stderr: "" May 30 21:24:26.071: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 30 21:24:26.071: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 30 21:24:26.071: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6054" to be "running and ready, or succeeded" May 30 21:24:26.102: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 30.825547ms May 30 21:24:28.168: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096112436s May 30 21:24:30.172: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.100602804s May 30 21:24:30.172: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 30 21:24:30.172: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings May 30 21:24:30.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6054' May 30 21:24:30.302: INFO: stderr: "" May 30 21:24:30.302: INFO: stdout: "I0530 21:24:28.634462 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/khh 452\nI0530 21:24:28.834608 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/hb6 376\nI0530 21:24:29.034685 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/q8t 418\nI0530 21:24:29.234633 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/2j2 390\nI0530 21:24:29.434624 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/n5q 442\nI0530 21:24:29.634666 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/zgw 211\nI0530 21:24:29.834703 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/tg85 415\nI0530 21:24:30.034700 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/rp8 596\nI0530 21:24:30.234635 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/xxb2 512\n" STEP: limiting log lines May 30 21:24:30.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6054 --tail=1' May 30 21:24:30.409: INFO: stderr: "" May 30 21:24:30.409: INFO: stdout: "I0530 21:24:30.234635 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/xxb2 512\n" May 30 21:24:30.409: INFO: got output "I0530 21:24:30.234635 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/xxb2 512\n" STEP: limiting log bytes May 30 21:24:30.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6054 --limit-bytes=1' May 30 21:24:30.523: INFO: stderr: "" May 30 21:24:30.523: INFO: stdout: "I" May 30 21:24:30.523: INFO: got output "I" STEP: exposing timestamps May 30 21:24:30.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6054 --tail=1 --timestamps' May 30 21:24:30.649: INFO: stderr: "" May 30 21:24:30.649: INFO: stdout: "2020-05-30T21:24:30.634874809Z I0530 21:24:30.634695 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/vw4 266\n" May 30 21:24:30.649: INFO: got output "2020-05-30T21:24:30.634874809Z I0530 21:24:30.634695 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/vw4 266\n" STEP: restricting to a time range May 30 21:24:33.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6054 --since=1s' May 30 21:24:33.258: INFO: stderr: "" May 30 21:24:33.258: INFO: stdout: "I0530 21:24:32.434642 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/rxs2 502\nI0530 21:24:32.634719 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/6wdm 374\nI0530 21:24:32.834689 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/2h9 536\nI0530 21:24:33.034698 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/5sd 316\nI0530 21:24:33.234633 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/nt78 246\n" May 30 21:24:33.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6054 --since=24h' May 30 21:24:33.369: INFO: stderr: "" May 30 21:24:33.369: INFO: stdout: "I0530 21:24:28.634462 1 logs_generator.go:76] 0 POST
/api/v1/namespaces/default/pods/khh 452\nI0530 21:24:28.834608 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/hb6 376\nI0530 21:24:29.034685 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/q8t 418\nI0530 21:24:29.234633 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/2j2 390\nI0530 21:24:29.434624 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/n5q 442\nI0530 21:24:29.634666 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/zgw 211\nI0530 21:24:29.834703 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/tg85 415\nI0530 21:24:30.034700 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/rp8 596\nI0530 21:24:30.234635 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/xxb2 512\nI0530 21:24:30.434643 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/k9s7 445\nI0530 21:24:30.634695 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/vw4 266\nI0530 21:24:30.834669 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/9l6z 569\nI0530 21:24:31.034663 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/6f5 595\nI0530 21:24:31.234685 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/pmj 494\nI0530 21:24:31.434664 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/w4p 328\nI0530 21:24:31.634649 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/cdrw 237\nI0530 21:24:31.834705 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/n7w 443\nI0530 21:24:32.034680 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/prbs 393\nI0530 21:24:32.234669 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/kjk 297\nI0530 21:24:32.434642 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/rxs2 502\nI0530 21:24:32.634719 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/6wdm 374\nI0530 21:24:32.834689 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/2h9 536\nI0530 21:24:33.034698 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/5sd 316\nI0530 21:24:33.234633 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/nt78 246\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 30 21:24:33.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6054' May 30 21:24:39.481: INFO: stderr: "" May 30 21:24:39.481: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:24:39.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6054" for this suite. 
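Each filtering behaviour the spec verifies maps one-to-one onto a kubectl logs flag; re-run against the same pod while it still exists, the four calls from the log are:

kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6054 --tail=1                # last line only
kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6054 --limit-bytes=1         # first byte only
kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6054 --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6054 --since=1s              # only entries from the last second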
• [SLOW TEST:13.591 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":29,"skipped":480,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:24:39.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:24:50.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1049" for this suite. • [SLOW TEST:11.201 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":30,"skipped":484,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:24:50.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-8199 STEP: creating replication controller nodeport-test in namespace services-8199 I0530 21:24:50.846566 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-8199, replica count: 2 I0530 21:24:53.896932 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 21:24:56.897455 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 21:24:56.897: INFO: Creating new exec pod May 30 21:25:01.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8199 execpodxb249 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 30 21:25:02.143: INFO: stderr: "I0530 21:25:02.048246 1408 log.go:172] (0xc000b12c60) (0xc000b0a500) Create stream\nI0530 21:25:02.048308 1408 log.go:172] (0xc000b12c60) (0xc000b0a500) Stream added, broadcasting: 1\nI0530 21:25:02.050670 1408 log.go:172] (0xc000b12c60) Reply frame received for 1\nI0530 21:25:02.050700 1408 log.go:172] (0xc000b12c60) (0xc000b0a5a0) Create stream\nI0530 21:25:02.050710 1408 log.go:172] (0xc000b12c60) (0xc000b0a5a0) Stream added, broadcasting: 3\nI0530 21:25:02.051819 1408 log.go:172] (0xc000b12c60) Reply frame received for 3\nI0530 21:25:02.052153 1408 log.go:172] (0xc000b12c60) (0xc000aa40a0) Create stream\nI0530 21:25:02.052175 1408 log.go:172] (0xc000b12c60) (0xc000aa40a0) Stream added, broadcasting: 5\nI0530 21:25:02.054567 1408 log.go:172] (0xc000b12c60) Reply frame received for 5\nI0530 21:25:02.133921 1408 log.go:172] (0xc000b12c60) Data frame received for 5\nI0530 21:25:02.133950 1408 log.go:172] (0xc000aa40a0) (5) Data frame handling\nI0530 21:25:02.133968 1408 log.go:172] (0xc000aa40a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0530 21:25:02.134856 1408 log.go:172] (0xc000b12c60) Data frame received for 5\nI0530 21:25:02.134886 1408 log.go:172] (0xc000aa40a0) (5) Data frame handling\nI0530 21:25:02.134915 1408 log.go:172] (0xc000aa40a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0530 21:25:02.134954 1408 log.go:172] (0xc000b12c60) Data frame received for 3\nI0530 21:25:02.134971 1408 log.go:172] (0xc000b0a5a0) (3) Data frame handling\nI0530 21:25:02.135944 1408 log.go:172] (0xc000b12c60) Data frame received for 5\nI0530 21:25:02.135970 1408 log.go:172] (0xc000aa40a0) (5) Data frame 
handling\nI0530 21:25:02.137462 1408 log.go:172] (0xc000b12c60) Data frame received for 1\nI0530 21:25:02.137563 1408 log.go:172] (0xc000b0a500) (1) Data frame handling\nI0530 21:25:02.137604 1408 log.go:172] (0xc000b0a500) (1) Data frame sent\nI0530 21:25:02.137633 1408 log.go:172] (0xc000b12c60) (0xc000b0a500) Stream removed, broadcasting: 1\nI0530 21:25:02.137740 1408 log.go:172] (0xc000b12c60) Go away received\nI0530 21:25:02.138084 1408 log.go:172] (0xc000b12c60) (0xc000b0a500) Stream removed, broadcasting: 1\nI0530 21:25:02.138119 1408 log.go:172] (0xc000b12c60) (0xc000b0a5a0) Stream removed, broadcasting: 3\nI0530 21:25:02.138136 1408 log.go:172] (0xc000b12c60) (0xc000aa40a0) Stream removed, broadcasting: 5\n" May 30 21:25:02.143: INFO: stdout: "" May 30 21:25:02.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8199 execpodxb249 -- /bin/sh -x -c nc -zv -t -w 2 10.107.120.93 80' May 30 21:25:02.346: INFO: stderr: "I0530 21:25:02.269486 1431 log.go:172] (0xc000900000) (0xc0006f74a0) Create stream\nI0530 21:25:02.269584 1431 log.go:172] (0xc000900000) (0xc0006f74a0) Stream added, broadcasting: 1\nI0530 21:25:02.272795 1431 log.go:172] (0xc000900000) Reply frame received for 1\nI0530 21:25:02.272836 1431 log.go:172] (0xc000900000) (0xc00091a000) Create stream\nI0530 21:25:02.272846 1431 log.go:172] (0xc000900000) (0xc00091a000) Stream added, broadcasting: 3\nI0530 21:25:02.273955 1431 log.go:172] (0xc000900000) Reply frame received for 3\nI0530 21:25:02.274017 1431 log.go:172] (0xc000900000) (0xc000afa000) Create stream\nI0530 21:25:02.274041 1431 log.go:172] (0xc000900000) (0xc000afa000) Stream added, broadcasting: 5\nI0530 21:25:02.274905 1431 log.go:172] (0xc000900000) Reply frame received for 5\nI0530 21:25:02.339009 1431 log.go:172] (0xc000900000) Data frame received for 5\nI0530 21:25:02.339080 1431 log.go:172] (0xc000afa000) (5) Data frame handling\nI0530 21:25:02.339103 1431 log.go:172] (0xc000afa000) (5) Data frame sent\nI0530 21:25:02.339123 1431 log.go:172] (0xc000900000) Data frame received for 5\nI0530 21:25:02.339141 1431 log.go:172] (0xc000afa000) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.120.93 80\nConnection to 10.107.120.93 80 port [tcp/http] succeeded!\nI0530 21:25:02.339185 1431 log.go:172] (0xc000900000) Data frame received for 3\nI0530 21:25:02.339199 1431 log.go:172] (0xc00091a000) (3) Data frame handling\nI0530 21:25:02.340460 1431 log.go:172] (0xc000900000) Data frame received for 1\nI0530 21:25:02.340479 1431 log.go:172] (0xc0006f74a0) (1) Data frame handling\nI0530 21:25:02.340498 1431 log.go:172] (0xc0006f74a0) (1) Data frame sent\nI0530 21:25:02.340515 1431 log.go:172] (0xc000900000) (0xc0006f74a0) Stream removed, broadcasting: 1\nI0530 21:25:02.340588 1431 log.go:172] (0xc000900000) Go away received\nI0530 21:25:02.340809 1431 log.go:172] (0xc000900000) (0xc0006f74a0) Stream removed, broadcasting: 1\nI0530 21:25:02.340827 1431 log.go:172] (0xc000900000) (0xc00091a000) Stream removed, broadcasting: 3\nI0530 21:25:02.340837 1431 log.go:172] (0xc000900000) (0xc000afa000) Stream removed, broadcasting: 5\n" May 30 21:25:02.346: INFO: stdout: "" May 30 21:25:02.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8199 execpodxb249 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30474' May 30 21:25:02.560: INFO: stderr: "I0530 21:25:02.485804 1452 log.go:172] (0xc0003d8210) (0xc0006d1b80) Create stream\nI0530 21:25:02.485875 1452 log.go:172] (0xc0003d8210) 
(0xc0006d1b80) Stream added, broadcasting: 1\nI0530 21:25:02.490013 1452 log.go:172] (0xc0003d8210) Reply frame received for 1\nI0530 21:25:02.490066 1452 log.go:172] (0xc0003d8210) (0xc0006d1d60) Create stream\nI0530 21:25:02.490090 1452 log.go:172] (0xc0003d8210) (0xc0006d1d60) Stream added, broadcasting: 3\nI0530 21:25:02.490989 1452 log.go:172] (0xc0003d8210) Reply frame received for 3\nI0530 21:25:02.491033 1452 log.go:172] (0xc0003d8210) (0xc000632000) Create stream\nI0530 21:25:02.491045 1452 log.go:172] (0xc0003d8210) (0xc000632000) Stream added, broadcasting: 5\nI0530 21:25:02.492160 1452 log.go:172] (0xc0003d8210) Reply frame received for 5\nI0530 21:25:02.554143 1452 log.go:172] (0xc0003d8210) Data frame received for 3\nI0530 21:25:02.554191 1452 log.go:172] (0xc0006d1d60) (3) Data frame handling\nI0530 21:25:02.554226 1452 log.go:172] (0xc0003d8210) Data frame received for 5\nI0530 21:25:02.554245 1452 log.go:172] (0xc000632000) (5) Data frame handling\nI0530 21:25:02.554263 1452 log.go:172] (0xc000632000) (5) Data frame sent\nI0530 21:25:02.554279 1452 log.go:172] (0xc0003d8210) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.10 30474\nConnection to 172.17.0.10 30474 port [tcp/30474] succeeded!\nI0530 21:25:02.554289 1452 log.go:172] (0xc000632000) (5) Data frame handling\nI0530 21:25:02.555778 1452 log.go:172] (0xc0003d8210) Data frame received for 1\nI0530 21:25:02.555810 1452 log.go:172] (0xc0006d1b80) (1) Data frame handling\nI0530 21:25:02.555840 1452 log.go:172] (0xc0006d1b80) (1) Data frame sent\nI0530 21:25:02.555867 1452 log.go:172] (0xc0003d8210) (0xc0006d1b80) Stream removed, broadcasting: 1\nI0530 21:25:02.555890 1452 log.go:172] (0xc0003d8210) Go away received\nI0530 21:25:02.556282 1452 log.go:172] (0xc0003d8210) (0xc0006d1b80) Stream removed, broadcasting: 1\nI0530 21:25:02.556315 1452 log.go:172] (0xc0003d8210) (0xc0006d1d60) Stream removed, broadcasting: 3\nI0530 21:25:02.556327 1452 log.go:172] (0xc0003d8210) (0xc000632000) Stream removed, broadcasting: 5\n" May 30 21:25:02.561: INFO: stdout: "" May 30 21:25:02.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8199 execpodxb249 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30474' May 30 21:25:02.776: INFO: stderr: "I0530 21:25:02.682908 1474 log.go:172] (0xc000b32420) (0xc0005ec6e0) Create stream\nI0530 21:25:02.683003 1474 log.go:172] (0xc000b32420) (0xc0005ec6e0) Stream added, broadcasting: 1\nI0530 21:25:02.686900 1474 log.go:172] (0xc000b32420) Reply frame received for 1\nI0530 21:25:02.686942 1474 log.go:172] (0xc000b32420) (0xc00063fae0) Create stream\nI0530 21:25:02.686957 1474 log.go:172] (0xc000b32420) (0xc00063fae0) Stream added, broadcasting: 3\nI0530 21:25:02.688106 1474 log.go:172] (0xc000b32420) Reply frame received for 3\nI0530 21:25:02.688160 1474 log.go:172] (0xc000b32420) (0xc000b2a1e0) Create stream\nI0530 21:25:02.688178 1474 log.go:172] (0xc000b32420) (0xc000b2a1e0) Stream added, broadcasting: 5\nI0530 21:25:02.689521 1474 log.go:172] (0xc000b32420) Reply frame received for 5\nI0530 21:25:02.769394 1474 log.go:172] (0xc000b32420) Data frame received for 3\nI0530 21:25:02.769460 1474 log.go:172] (0xc00063fae0) (3) Data frame handling\nI0530 21:25:02.769490 1474 log.go:172] (0xc000b32420) Data frame received for 5\nI0530 21:25:02.769504 1474 log.go:172] (0xc000b2a1e0) (5) Data frame handling\nI0530 21:25:02.769516 1474 log.go:172] (0xc000b2a1e0) (5) Data frame sent\nI0530 21:25:02.769527 1474 log.go:172] (0xc000b32420) Data frame received 
for 5\nI0530 21:25:02.769535 1474 log.go:172] (0xc000b2a1e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30474\nConnection to 172.17.0.8 30474 port [tcp/30474] succeeded!\nI0530 21:25:02.771295 1474 log.go:172] (0xc000b32420) Data frame received for 1\nI0530 21:25:02.771315 1474 log.go:172] (0xc0005ec6e0) (1) Data frame handling\nI0530 21:25:02.771330 1474 log.go:172] (0xc0005ec6e0) (1) Data frame sent\nI0530 21:25:02.771338 1474 log.go:172] (0xc000b32420) (0xc0005ec6e0) Stream removed, broadcasting: 1\nI0530 21:25:02.771499 1474 log.go:172] (0xc000b32420) Go away received\nI0530 21:25:02.771639 1474 log.go:172] (0xc000b32420) (0xc0005ec6e0) Stream removed, broadcasting: 1\nI0530 21:25:02.771660 1474 log.go:172] (0xc000b32420) (0xc00063fae0) Stream removed, broadcasting: 3\nI0530 21:25:02.771675 1474 log.go:172] (0xc000b32420) (0xc000b2a1e0) Stream removed, broadcasting: 5\n" May 30 21:25:02.776: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:25:02.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8199" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.094 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":31,"skipped":492,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:25:02.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ac004727-519d-4405-a399-353377a4467b STEP: Creating a pod to test consume secrets May 30 21:25:02.874: INFO: Waiting up to 5m0s for pod "pod-secrets-c3140bff-fc41-4897-8b48-0726f82b6f7b" in namespace "secrets-2997" to be "success or failure" May 30 21:25:02.892: INFO: Pod "pod-secrets-c3140bff-fc41-4897-8b48-0726f82b6f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.52741ms May 30 21:25:04.896: INFO: Pod "pod-secrets-c3140bff-fc41-4897-8b48-0726f82b6f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021581995s May 30 21:25:06.900: INFO: Pod "pod-secrets-c3140bff-fc41-4897-8b48-0726f82b6f7b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026024189s STEP: Saw pod success May 30 21:25:06.900: INFO: Pod "pod-secrets-c3140bff-fc41-4897-8b48-0726f82b6f7b" satisfied condition "success or failure" May 30 21:25:06.903: INFO: Trying to get logs from node jerma-worker pod pod-secrets-c3140bff-fc41-4897-8b48-0726f82b6f7b container secret-env-test: STEP: delete the pod May 30 21:25:06.939: INFO: Waiting for pod pod-secrets-c3140bff-fc41-4897-8b48-0726f82b6f7b to disappear May 30 21:25:06.945: INFO: Pod pod-secrets-c3140bff-fc41-4897-8b48-0726f82b6f7b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:25:06.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2997" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":497,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:25:06.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:25:07.396: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:25:09.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470707, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470707, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470707, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470707, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:25:11.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470707, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470707, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470707, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726470707, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:25:14.441: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:25:14.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2277" for this suite. STEP: Destroying namespace "webhook-2277-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.643 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":33,"skipped":507,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:25:14.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7547 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7547;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7547 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7547;check="$$(dig +notcp +noall 
+answer +search dns-test-service.dns-7547.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7547.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7547.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7547.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7547.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7547.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7547.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7547.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7547.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7547.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7547.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7547.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7547.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 231.152.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.152.231_udp@PTR;check="$$(dig +tcp +noall +answer +search 231.152.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.152.231_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7547 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7547;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7547 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7547;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7547.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7547.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7547.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7547.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7547.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7547.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7547.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7547.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7547.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7547.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7547.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7547.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7547.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 231.152.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.152.231_udp@PTR;check="$$(dig +tcp +noall +answer +search 231.152.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.152.231_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 21:25:20.951: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:20.954: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:20.956: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:20.960: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:20.963: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:20.966: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:20.969: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:20.972: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:20.994: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:20.996: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:21.000: INFO: Unable to read jessie_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:21.002: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:21.009: INFO: Unable to read jessie_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:21.012: INFO: Unable to read jessie_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:21.014: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:21.032: INFO: Lookups using dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7547 wheezy_tcp@dns-test-service.dns-7547 wheezy_udp@dns-test-service.dns-7547.svc wheezy_tcp@dns-test-service.dns-7547.svc wheezy_udp@_http._tcp.dns-test-service.dns-7547.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7547.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7547 jessie_tcp@dns-test-service.dns-7547 jessie_udp@dns-test-service.dns-7547.svc jessie_tcp@dns-test-service.dns-7547.svc jessie_udp@_http._tcp.dns-test-service.dns-7547.svc] May 30 21:25:26.039: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.042: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.057: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.060: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.062: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.066: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.115: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.118: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: 
the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.121: INFO: Unable to read jessie_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.124: INFO: Unable to read jessie_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.127: INFO: Unable to read jessie_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.130: INFO: Unable to read jessie_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:26.162: INFO: Lookups using dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7547 wheezy_tcp@dns-test-service.dns-7547 wheezy_udp@dns-test-service.dns-7547.svc wheezy_tcp@dns-test-service.dns-7547.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7547 jessie_tcp@dns-test-service.dns-7547 jessie_udp@dns-test-service.dns-7547.svc jessie_tcp@dns-test-service.dns-7547.svc] May 30 21:25:31.036: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.040: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.043: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.047: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.050: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.054: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.084: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.088: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested 
resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.091: INFO: Unable to read jessie_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.095: INFO: Unable to read jessie_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.098: INFO: Unable to read jessie_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.101: INFO: Unable to read jessie_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:31.129: INFO: Lookups using dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7547 wheezy_tcp@dns-test-service.dns-7547 wheezy_udp@dns-test-service.dns-7547.svc wheezy_tcp@dns-test-service.dns-7547.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7547 jessie_tcp@dns-test-service.dns-7547 jessie_udp@dns-test-service.dns-7547.svc jessie_tcp@dns-test-service.dns-7547.svc] May 30 21:25:36.050: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.057: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.069: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.071: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.074: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.077: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.102: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.105: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods 
dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.108: INFO: Unable to read jessie_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.112: INFO: Unable to read jessie_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.115: INFO: Unable to read jessie_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.118: INFO: Unable to read jessie_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:36.144: INFO: Lookups using dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7547 wheezy_tcp@dns-test-service.dns-7547 wheezy_udp@dns-test-service.dns-7547.svc wheezy_tcp@dns-test-service.dns-7547.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7547 jessie_tcp@dns-test-service.dns-7547 jessie_udp@dns-test-service.dns-7547.svc jessie_tcp@dns-test-service.dns-7547.svc] May 30 21:25:41.036: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.040: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.044: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.047: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.051: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.054: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.080: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.083: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods 
dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.085: INFO: Unable to read jessie_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.158: INFO: Unable to read jessie_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.162: INFO: Unable to read jessie_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.165: INFO: Unable to read jessie_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:41.191: INFO: Lookups using dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7547 wheezy_tcp@dns-test-service.dns-7547 wheezy_udp@dns-test-service.dns-7547.svc wheezy_tcp@dns-test-service.dns-7547.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7547 jessie_tcp@dns-test-service.dns-7547 jessie_udp@dns-test-service.dns-7547.svc jessie_tcp@dns-test-service.dns-7547.svc] May 30 21:25:46.036: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.038: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.041: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.067: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.070: INFO: Unable to read wheezy_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.074: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.102: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.105: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods 
dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.108: INFO: Unable to read jessie_udp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.111: INFO: Unable to read jessie_tcp@dns-test-service.dns-7547 from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.114: INFO: Unable to read jessie_udp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.117: INFO: Unable to read jessie_tcp@dns-test-service.dns-7547.svc from pod dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9: the server could not find the requested resource (get pods dns-test-759ae629-0323-4657-915c-c180603161b9) May 30 21:25:46.137: INFO: Lookups using dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7547 wheezy_tcp@dns-test-service.dns-7547 wheezy_udp@dns-test-service.dns-7547.svc wheezy_tcp@dns-test-service.dns-7547.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7547 jessie_tcp@dns-test-service.dns-7547 jessie_udp@dns-test-service.dns-7547.svc jessie_tcp@dns-test-service.dns-7547.svc] May 30 21:25:51.196: INFO: DNS probes using dns-7547/dns-test-759ae629-0323-4657-915c-c180603161b9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:25:52.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7547" for this suite. 
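The probe loops above depend on the pod resolver's search path expanding partially qualified names; a minimal manual equivalent, run from any pod in the same namespace (service and namespace names taken from this run, dig options as used by the probes):

  # Bare service name, expanded via the search domains in /etc/resolv.conf
  dig +search +short dns-test-service A
  # Progressively more qualified forms, as exercised by the wheezy/jessie probes
  dig +search +short dns-test-service.dns-7547 A
  dig +search +short dns-test-service.dns-7547.svc A
  # SRV lookup for the service's named http port
  dig +search +short _http._tcp.dns-test-service.dns-7547.svc SRV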
• [SLOW TEST:37.414 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":34,"skipped":532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:25:52.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-322 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-322 STEP: creating replication controller externalsvc in namespace services-322 I0530 21:25:52.264402 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-322, replica count: 2 I0530 21:25:55.314931 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 21:25:58.315178 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 30 21:25:58.372: INFO: Creating new exec pod May 30 21:26:02.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-322 execpodxnppq -- /bin/sh -x -c nslookup clusterip-service' May 30 21:26:02.811: INFO: stderr: "I0530 21:26:02.544744 1494 log.go:172] (0xc000940000) (0xc000a8c000) Create stream\nI0530 21:26:02.544811 1494 log.go:172] (0xc000940000) (0xc000a8c000) Stream added, broadcasting: 1\nI0530 21:26:02.546829 1494 log.go:172] (0xc000940000) Reply frame received for 1\nI0530 21:26:02.546858 1494 log.go:172] (0xc000940000) (0xc000717a40) Create stream\nI0530 21:26:02.546865 1494 log.go:172] (0xc000940000) (0xc000717a40) Stream added, broadcasting: 3\nI0530 21:26:02.547630 1494 log.go:172] (0xc000940000) Reply frame received for 3\nI0530 21:26:02.547659 1494 log.go:172] (0xc000940000) (0xc000717c20) Create stream\nI0530 21:26:02.547667 1494 log.go:172] (0xc000940000) (0xc000717c20) Stream added, broadcasting: 5\nI0530 21:26:02.548873 1494 log.go:172] (0xc000940000) Reply frame received for 5\nI0530 21:26:02.634358 1494 log.go:172] (0xc000940000) Data frame received for 5\nI0530 21:26:02.634382 1494 log.go:172] 
(0xc000717c20) (5) Data frame handling\nI0530 21:26:02.634398 1494 log.go:172] (0xc000717c20) (5) Data frame sent\n+ nslookup clusterip-service\nI0530 21:26:02.801599 1494 log.go:172] (0xc000940000) Data frame received for 3\nI0530 21:26:02.801634 1494 log.go:172] (0xc000717a40) (3) Data frame handling\nI0530 21:26:02.801647 1494 log.go:172] (0xc000717a40) (3) Data frame sent\nI0530 21:26:02.802632 1494 log.go:172] (0xc000940000) Data frame received for 3\nI0530 21:26:02.802646 1494 log.go:172] (0xc000717a40) (3) Data frame handling\nI0530 21:26:02.802658 1494 log.go:172] (0xc000717a40) (3) Data frame sent\nI0530 21:26:02.803066 1494 log.go:172] (0xc000940000) Data frame received for 3\nI0530 21:26:02.803088 1494 log.go:172] (0xc000717a40) (3) Data frame handling\nI0530 21:26:02.803216 1494 log.go:172] (0xc000940000) Data frame received for 5\nI0530 21:26:02.803236 1494 log.go:172] (0xc000717c20) (5) Data frame handling\nI0530 21:26:02.804831 1494 log.go:172] (0xc000940000) Data frame received for 1\nI0530 21:26:02.804851 1494 log.go:172] (0xc000a8c000) (1) Data frame handling\nI0530 21:26:02.804867 1494 log.go:172] (0xc000a8c000) (1) Data frame sent\nI0530 21:26:02.805036 1494 log.go:172] (0xc000940000) (0xc000a8c000) Stream removed, broadcasting: 1\nI0530 21:26:02.805067 1494 log.go:172] (0xc000940000) Go away received\nI0530 21:26:02.805660 1494 log.go:172] (0xc000940000) (0xc000a8c000) Stream removed, broadcasting: 1\nI0530 21:26:02.805684 1494 log.go:172] (0xc000940000) (0xc000717a40) Stream removed, broadcasting: 3\nI0530 21:26:02.805711 1494 log.go:172] (0xc000940000) (0xc000717c20) Stream removed, broadcasting: 5\n" May 30 21:26:02.811: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-322.svc.cluster.local\tcanonical name = externalsvc.services-322.svc.cluster.local.\nName:\texternalsvc.services-322.svc.cluster.local\nAddress: 10.99.0.5\n\n" STEP: deleting ReplicationController externalsvc in namespace services-322, will wait for the garbage collector to delete the pods May 30 21:26:02.871: INFO: Deleting ReplicationController externalsvc took: 6.570735ms May 30 21:26:04.971: INFO: Terminating ReplicationController externalsvc pods took: 2.100192821s May 30 21:26:09.714: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:26:09.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-322" for this suite. 
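The type flip at the heart of this spec is a single change to the Service spec; the test drives it through the client library, but a hand-run sketch would look like the following (names from this run; the patch payload is an assumption about the minimal fields involved, and on some API versions spec.clusterIP must also be cleared for the conversion to be accepted):

  # Convert the service to ExternalName, pointing at the backing service's FQDN
  kubectl patch svc clusterip-service --namespace=services-322 \
    -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-322.svc.cluster.local"}}'
  # From a pod in the namespace, the name should now resolve as a CNAME,
  # matching the nslookup output captured above
  nslookup clusterip-service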
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:17.735 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":35,"skipped":583,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:26:09.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:26:09.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-603" for this suite. 
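The discovery walk in this spec maps one-to-one onto raw API requests; a minimal sketch using kubectl's raw mode (paths exactly as fetched above):

  # Root discovery document listing all API groups, including apiextensions.k8s.io
  kubectl get --raw /apis
  # Group document with its available and preferred versions
  kubectl get --raw /apis/apiextensions.k8s.io
  # Version document whose resource list contains customresourcedefinitions
  kubectl get --raw /apis/apiextensions.k8s.io/v1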
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":36,"skipped":593,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:26:09.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:26:09.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5959" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":37,"skipped":595,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:26:10.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:26:10.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988" in namespace "downward-api-4638" to be "success or failure" May 30 21:26:10.140: INFO: Pod "downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988": Phase="Pending", Reason="", readiness=false. Elapsed: 36.089545ms May 30 21:26:12.325: INFO: Pod "downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221357519s May 30 21:26:14.329: INFO: Pod "downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988": Phase="Running", Reason="", readiness=true. 
SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:26:10.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:26:10.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988" in namespace "downward-api-4638" to be "success or failure" May 30 21:26:10.140: INFO: Pod "downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988": Phase="Pending", Reason="", readiness=false. Elapsed: 36.089545ms May 30 21:26:12.325: INFO: Pod "downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221357519s May 30 21:26:14.329: INFO: Pod "downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988": Phase="Running", Reason="", readiness=true. Elapsed: 4.225105099s May 30 21:26:16.333: INFO: Pod "downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.229056299s STEP: Saw pod success May 30 21:26:16.333: INFO: Pod "downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988" satisfied condition "success or failure" May 30 21:26:16.336: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988 container client-container: STEP: delete the pod May 30 21:26:16.403: INFO: Waiting for pod downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988 to disappear May 30 21:26:16.451: INFO: Pod downwardapi-volume-9527d07a-a59e-4b4e-b697-a9061fc50988 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:26:16.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4638" for this suite. • [SLOW TEST:6.452 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":609,"failed":0}
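The "mode on item file" check boils down to a downwardAPI volume item carrying an explicit mode; a minimal sketch of such a pod (file name, mode value, and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29          # assumed image
    command: ["/bin/sh", "-c", "stat -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                       # the per-item mode the test asserts on
EOF
kubectl logs downwardapi-mode-demo       # expect: 400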
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:26:16.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:26:29.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9073" for this suite. • [SLOW TEST:13.360 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":39,"skipped":609,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:26:29.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 30 21:26:29.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3277' May 30 21:26:30.013: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 30 21:26:30.013: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 30 21:26:30.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-3277' May 30 21:26:30.160: INFO: stderr: "" May 30 21:26:30.160: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:26:30.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3277" for this suite.
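The stderr captured above is the generator deprecation warning in full. One way to follow its "kubectl create instead" advice is kubectl create job; a sketch reusing the image and namespace from the log (note this is not byte-for-byte equivalent, so check the resulting template's restartPolicy if OnFailure specifically matters):

# Deprecated form the test exercises:
kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3277
# Generator-free replacement:
kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine \
  --namespace=kubectl-3277
kubectl delete jobs e2e-test-httpd-job --namespace=kubectl-3277   # cleanup, as the AfterEach does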
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":40,"skipped":622,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:26:30.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-sr29 STEP: Creating a pod to test atomic-volume-subpath May 30 21:26:30.293: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-sr29" in namespace "subpath-544" to be "success or failure" May 30 21:26:30.329: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Pending", Reason="", readiness=false. Elapsed: 36.207396ms May 30 21:26:32.494: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201143293s May 30 21:26:34.498: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205144529s May 30 21:26:36.502: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Running", Reason="", readiness=true. Elapsed: 6.208841283s May 30 21:26:38.506: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Running", Reason="", readiness=true. Elapsed: 8.213092533s May 30 21:26:40.578: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Running", Reason="", readiness=true. Elapsed: 10.284499128s May 30 21:26:42.581: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Running", Reason="", readiness=true. Elapsed: 12.288436316s May 30 21:26:44.585: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Running", Reason="", readiness=true. Elapsed: 14.292143593s May 30 21:26:46.590: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Running", Reason="", readiness=true. Elapsed: 16.296611771s May 30 21:26:48.596: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Running", Reason="", readiness=true. Elapsed: 18.302707649s May 30 21:26:50.599: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Running", Reason="", readiness=true. Elapsed: 20.306445455s May 30 21:26:52.603: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Running", Reason="", readiness=true. Elapsed: 22.310368684s May 30 21:26:54.607: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Running", Reason="", readiness=true. Elapsed: 24.314223869s May 30 21:26:56.611: INFO: Pod "pod-subpath-test-downwardapi-sr29": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.318330471s STEP: Saw pod success May 30 21:26:56.611: INFO: Pod "pod-subpath-test-downwardapi-sr29" satisfied condition "success or failure" May 30 21:26:56.614: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-sr29 container test-container-subpath-downwardapi-sr29: STEP: delete the pod May 30 21:26:56.675: INFO: Waiting for pod pod-subpath-test-downwardapi-sr29 to disappear May 30 21:26:56.702: INFO: Pod pod-subpath-test-downwardapi-sr29 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-sr29 May 30 21:26:56.702: INFO: Deleting pod "pod-subpath-test-downwardapi-sr29" in namespace "subpath-544" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:26:56.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-544" for this suite. • [SLOW TEST:26.509 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":41,"skipped":646,"failed":0}
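The point of this spec is that a subPath mount into an atomic-writer volume (here downwardAPI, whose contents are symlink-swapped on update) still resolves correctly. The pod the suite builds is more elaborate; this sketch keeps only the subPath mechanics (names, paths, and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downwardapi-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29        # assumed image
    command: ["/bin/sh", "-c", "cat /mnt/sub/podname"]
    volumeMounts:
    - name: downward
      mountPath: /mnt/sub
      subPath: outer                     # mount only the "outer" directory of the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: outer/podname
        fieldRef:
          fieldPath: metadata.name
EOF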
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:26:56.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-a5199242-cbb5-463f-bed7-dca71485816a STEP: Creating a pod to test consume secrets May 30 21:26:56.831: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec4e8c09-9041-43a8-b132-caad34a102ba" in namespace "projected-5189" to be "success or failure" May 30 21:26:56.835: INFO: Pod "pod-projected-secrets-ec4e8c09-9041-43a8-b132-caad34a102ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106819ms May 30 21:26:58.839: INFO: Pod "pod-projected-secrets-ec4e8c09-9041-43a8-b132-caad34a102ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008831158s May 30 21:27:00.844: INFO: Pod "pod-projected-secrets-ec4e8c09-9041-43a8-b132-caad34a102ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013503983s STEP: Saw pod success May 30 21:27:00.844: INFO: Pod "pod-projected-secrets-ec4e8c09-9041-43a8-b132-caad34a102ba" satisfied condition "success or failure" May 30 21:27:00.847: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-ec4e8c09-9041-43a8-b132-caad34a102ba container projected-secret-volume-test: STEP: delete the pod May 30 21:27:00.901: INFO: Waiting for pod pod-projected-secrets-ec4e8c09-9041-43a8-b132-caad34a102ba to disappear May 30 21:27:00.915: INFO: Pod pod-projected-secrets-ec4e8c09-9041-43a8-b132-caad34a102ba no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:27:00.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5189" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":646,"failed":0} ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:27:00.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-3df3a3f9-4af7-4733-8e24-c74db6d103e9 in namespace container-probe-21 May 30 21:27:05.105: INFO: Started pod test-webserver-3df3a3f9-4af7-4733-8e24-c74db6d103e9 in namespace container-probe-21 STEP: checking the pod's current state and verifying that restartCount is present May 30 21:27:05.110: INFO: Initial restart count of pod test-webserver-3df3a3f9-4af7-4733-8e24-c74db6d103e9 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:31:05.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-21" for this suite.
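A pod equivalent to the test-webserver pod above would carry an httpGet liveness probe against /healthz; the log does not show the pod spec, so the image, port, and timings here are assumptions, but the shape of the probe is the point:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo              # illustrative name
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed image
    livenessProbe:
      httpGet:
        path: /healthz                   # a handler that keeps answering 200
        port: 80                         # assumed port
      initialDelaySeconds: 15
      periodSeconds: 3
EOF
# The assertion the test makes after ~4 minutes of watching:
kubectl get pod test-webserver-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'   # expect: 0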
• [SLOW TEST:245.103 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":646,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:31:06.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:31:10.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-375" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:31:10.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:31:14.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4947" for this suite. 
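The Kubelet spec just above runs a busybox container that always exits non-zero and then checks that the kubelet records a terminated state with a non-empty reason (its pod, bin-false..., is even visible in the scheduler test's node listing below). A hand-rolled sketch (names and image tag illustrative; the suite generates a random suffix):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo                   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29          # assumed tag
    command: ["/bin/false"]              # always fails, so the container terminates
EOF
# The reason the test asserts on (typically "Error" for a non-zero exit):
kubectl get pod bin-false-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'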
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":684,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:31:14.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 30 21:31:14.904: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 30 21:31:14.936: INFO: Waiting for terminating namespaces to be deleted... May 30 21:31:14.939: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 30 21:31:14.944: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 30 21:31:14.945: INFO: Container kindnet-cni ready: true, restart count 2 May 30 21:31:14.945: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 30 21:31:14.945: INFO: Container kube-proxy ready: true, restart count 0 May 30 21:31:14.945: INFO: client-containers-3c766b9e-f291-4d13-8a74-cdbe2e583148 from containers-375 started at 2020-05-30 21:31:06 +0000 UTC (1 container statuses recorded) May 30 21:31:14.945: INFO: Container test-container ready: true, restart count 0 May 30 21:31:14.945: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 30 21:31:14.966: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 30 21:31:14.966: INFO: Container kube-proxy ready: true, restart count 0 May 30 21:31:14.966: INFO: bin-falsea09c40a1-15e2-4eb2-8a00-1950cd569ace from kubelet-test-4947 started at 2020-05-30 21:31:10 +0000 UTC (1 container statuses recorded) May 30 21:31:14.966: INFO: Container bin-falsea09c40a1-15e2-4eb2-8a00-1950cd569ace ready: false, restart count 0 May 30 21:31:14.966: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 30 21:31:14.966: INFO: Container kube-hunter ready: false, restart count 0 May 30 21:31:14.966: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 30 21:31:14.966: INFO: Container kindnet-cni ready: true, restart count 2 May 30 21:31:14.966: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 30 21:31:14.966: INFO: Container kube-bench ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1613ebb96ac7e182], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:31:16.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5267" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":46,"skipped":690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:31:16.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 30 21:31:26.338: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8099 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:31:26.338: INFO: >>> kubeConfig: /root/.kube/config I0530 21:31:26.375111 6 log.go:172] (0xc0015662c0) (0xc0013a1900) Create stream I0530 21:31:26.375140 6 log.go:172] (0xc0015662c0) (0xc0013a1900) Stream added, broadcasting: 1 I0530 21:31:26.379020 6 log.go:172] (0xc0015662c0) Reply frame received for 1 I0530 21:31:26.379075 6 log.go:172] (0xc0015662c0) (0xc001100140) Create stream I0530 21:31:26.379093 6 log.go:172] (0xc0015662c0) (0xc001100140) Stream added, broadcasting: 3 I0530 21:31:26.380201 6 log.go:172] (0xc0015662c0) Reply frame received for 3 I0530 21:31:26.380231 6 log.go:172] (0xc0015662c0) (0xc001100280) Create stream I0530 21:31:26.380243 6 log.go:172] (0xc0015662c0) (0xc001100280) Stream added, broadcasting: 5 I0530 21:31:26.381491 6 log.go:172] (0xc0015662c0) Reply frame received for 5 I0530 21:31:26.438232 6 log.go:172] (0xc0015662c0) Data frame received for 5 I0530 21:31:26.438277 6 log.go:172] (0xc001100280) (5) Data frame handling I0530 21:31:26.438306 6 log.go:172] (0xc0015662c0) Data frame received for 3 I0530 21:31:26.438317 6 log.go:172] (0xc001100140) (3) Data frame handling I0530 21:31:26.438326 6 log.go:172] (0xc001100140) (3) Data frame sent I0530 21:31:26.438341 6 log.go:172] (0xc0015662c0) Data frame received for 3 I0530 21:31:26.438351 6 log.go:172] (0xc001100140) (3) Data frame handling I0530 21:31:26.439856 6 log.go:172] (0xc0015662c0) Data 
frame received for 1 I0530 21:31:26.439915 6 log.go:172] (0xc0013a1900) (1) Data frame handling I0530 21:31:26.439945 6 log.go:172] (0xc0013a1900) (1) Data frame sent I0530 21:31:26.439964 6 log.go:172] (0xc0015662c0) (0xc0013a1900) Stream removed, broadcasting: 1 I0530 21:31:26.439983 6 log.go:172] (0xc0015662c0) Go away received I0530 21:31:26.440311 6 log.go:172] (0xc0015662c0) (0xc0013a1900) Stream removed, broadcasting: 1 I0530 21:31:26.440340 6 log.go:172] (0xc0015662c0) (0xc001100140) Stream removed, broadcasting: 3 I0530 21:31:26.440357 6 log.go:172] (0xc0015662c0) (0xc001100280) Stream removed, broadcasting: 5 May 30 21:31:26.440: INFO: Exec stderr: "" May 30 21:31:26.440: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8099 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:31:26.440: INFO: >>> kubeConfig: /root/.kube/config I0530 21:31:26.471268 6 log.go:172] (0xc0015668f0) (0xc002306000) Create stream I0530 21:31:26.471292 6 log.go:172] (0xc0015668f0) (0xc002306000) Stream added, broadcasting: 1 I0530 21:31:26.474540 6 log.go:172] (0xc0015668f0) Reply frame received for 1 I0530 21:31:26.474591 6 log.go:172] (0xc0015668f0) (0xc001100320) Create stream I0530 21:31:26.474609 6 log.go:172] (0xc0015668f0) (0xc001100320) Stream added, broadcasting: 3 I0530 21:31:26.475724 6 log.go:172] (0xc0015668f0) Reply frame received for 3 I0530 21:31:26.475771 6 log.go:172] (0xc0015668f0) (0xc002351860) Create stream I0530 21:31:26.475929 6 log.go:172] (0xc0015668f0) (0xc002351860) Stream added, broadcasting: 5 I0530 21:31:26.476832 6 log.go:172] (0xc0015668f0) Reply frame received for 5 I0530 21:31:26.542832 6 log.go:172] (0xc0015668f0) Data frame received for 5 I0530 21:31:26.542874 6 log.go:172] (0xc002351860) (5) Data frame handling I0530 21:31:26.542917 6 log.go:172] (0xc0015668f0) Data frame received for 3 I0530 21:31:26.542947 6 log.go:172] (0xc001100320) (3) Data frame handling I0530 21:31:26.542975 6 log.go:172] (0xc001100320) (3) Data frame sent I0530 21:31:26.542997 6 log.go:172] (0xc0015668f0) Data frame received for 3 I0530 21:31:26.543012 6 log.go:172] (0xc001100320) (3) Data frame handling I0530 21:31:26.544313 6 log.go:172] (0xc0015668f0) Data frame received for 1 I0530 21:31:26.544334 6 log.go:172] (0xc002306000) (1) Data frame handling I0530 21:31:26.544366 6 log.go:172] (0xc002306000) (1) Data frame sent I0530 21:31:26.544420 6 log.go:172] (0xc0015668f0) (0xc002306000) Stream removed, broadcasting: 1 I0530 21:31:26.544560 6 log.go:172] (0xc0015668f0) (0xc002306000) Stream removed, broadcasting: 1 I0530 21:31:26.544578 6 log.go:172] (0xc0015668f0) (0xc001100320) Stream removed, broadcasting: 3 I0530 21:31:26.544735 6 log.go:172] (0xc0015668f0) (0xc002351860) Stream removed, broadcasting: 5 I0530 21:31:26.544892 6 log.go:172] (0xc0015668f0) Go away received May 30 21:31:26.544: INFO: Exec stderr: "" May 30 21:31:26.545: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8099 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:31:26.545: INFO: >>> kubeConfig: /root/.kube/config I0530 21:31:26.581745 6 log.go:172] (0xc0017b2370) (0xc0011008c0) Create stream I0530 21:31:26.581781 6 log.go:172] (0xc0017b2370) (0xc0011008c0) Stream added, broadcasting: 1 I0530 21:31:26.584708 6 log.go:172] (0xc0017b2370) Reply frame received for 1 I0530 21:31:26.584749 6 log.go:172] 
(0xc0017b2370) (0xc0023060a0) Create stream I0530 21:31:26.584762 6 log.go:172] (0xc0017b2370) (0xc0023060a0) Stream added, broadcasting: 3 I0530 21:31:26.585837 6 log.go:172] (0xc0017b2370) Reply frame received for 3 I0530 21:31:26.585877 6 log.go:172] (0xc0017b2370) (0xc001ce63c0) Create stream I0530 21:31:26.585890 6 log.go:172] (0xc0017b2370) (0xc001ce63c0) Stream added, broadcasting: 5 I0530 21:31:26.586941 6 log.go:172] (0xc0017b2370) Reply frame received for 5 I0530 21:31:26.655079 6 log.go:172] (0xc0017b2370) Data frame received for 5 I0530 21:31:26.655123 6 log.go:172] (0xc001ce63c0) (5) Data frame handling I0530 21:31:26.655164 6 log.go:172] (0xc0017b2370) Data frame received for 3 I0530 21:31:26.655192 6 log.go:172] (0xc0023060a0) (3) Data frame handling I0530 21:31:26.655218 6 log.go:172] (0xc0023060a0) (3) Data frame sent I0530 21:31:26.655234 6 log.go:172] (0xc0017b2370) Data frame received for 3 I0530 21:31:26.655242 6 log.go:172] (0xc0023060a0) (3) Data frame handling I0530 21:31:26.656371 6 log.go:172] (0xc0017b2370) Data frame received for 1 I0530 21:31:26.656394 6 log.go:172] (0xc0011008c0) (1) Data frame handling I0530 21:31:26.656420 6 log.go:172] (0xc0011008c0) (1) Data frame sent I0530 21:31:26.656441 6 log.go:172] (0xc0017b2370) (0xc0011008c0) Stream removed, broadcasting: 1 I0530 21:31:26.656468 6 log.go:172] (0xc0017b2370) Go away received I0530 21:31:26.656575 6 log.go:172] (0xc0017b2370) (0xc0011008c0) Stream removed, broadcasting: 1 I0530 21:31:26.656598 6 log.go:172] (0xc0017b2370) (0xc0023060a0) Stream removed, broadcasting: 3 I0530 21:31:26.656609 6 log.go:172] (0xc0017b2370) (0xc001ce63c0) Stream removed, broadcasting: 5 May 30 21:31:26.656: INFO: Exec stderr: "" May 30 21:31:26.656: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8099 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:31:26.656: INFO: >>> kubeConfig: /root/.kube/config I0530 21:31:26.688527 6 log.go:172] (0xc001566f20) (0xc002306320) Create stream I0530 21:31:26.688557 6 log.go:172] (0xc001566f20) (0xc002306320) Stream added, broadcasting: 1 I0530 21:31:26.690668 6 log.go:172] (0xc001566f20) Reply frame received for 1 I0530 21:31:26.690711 6 log.go:172] (0xc001566f20) (0xc00135d9a0) Create stream I0530 21:31:26.690724 6 log.go:172] (0xc001566f20) (0xc00135d9a0) Stream added, broadcasting: 3 I0530 21:31:26.691513 6 log.go:172] (0xc001566f20) Reply frame received for 3 I0530 21:31:26.691549 6 log.go:172] (0xc001566f20) (0xc00135da40) Create stream I0530 21:31:26.691561 6 log.go:172] (0xc001566f20) (0xc00135da40) Stream added, broadcasting: 5 I0530 21:31:26.692355 6 log.go:172] (0xc001566f20) Reply frame received for 5 I0530 21:31:26.760654 6 log.go:172] (0xc001566f20) Data frame received for 3 I0530 21:31:26.760681 6 log.go:172] (0xc00135d9a0) (3) Data frame handling I0530 21:31:26.760688 6 log.go:172] (0xc00135d9a0) (3) Data frame sent I0530 21:31:26.760693 6 log.go:172] (0xc001566f20) Data frame received for 3 I0530 21:31:26.760699 6 log.go:172] (0xc00135d9a0) (3) Data frame handling I0530 21:31:26.760714 6 log.go:172] (0xc001566f20) Data frame received for 5 I0530 21:31:26.760721 6 log.go:172] (0xc00135da40) (5) Data frame handling I0530 21:31:26.762336 6 log.go:172] (0xc001566f20) Data frame received for 1 I0530 21:31:26.762374 6 log.go:172] (0xc002306320) (1) Data frame handling I0530 21:31:26.762403 6 log.go:172] (0xc002306320) (1) Data frame sent I0530 21:31:26.762439 6 
log.go:172] (0xc001566f20) (0xc002306320) Stream removed, broadcasting: 1 I0530 21:31:26.762477 6 log.go:172] (0xc001566f20) Go away received I0530 21:31:26.762626 6 log.go:172] (0xc001566f20) (0xc002306320) Stream removed, broadcasting: 1 I0530 21:31:26.762651 6 log.go:172] (0xc001566f20) (0xc00135d9a0) Stream removed, broadcasting: 3 I0530 21:31:26.762668 6 log.go:172] (0xc001566f20) (0xc00135da40) Stream removed, broadcasting: 5 May 30 21:31:26.762: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 30 21:31:26.762: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8099 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:31:26.762: INFO: >>> kubeConfig: /root/.kube/config I0530 21:31:26.796923 6 log.go:172] (0xc0017b29a0) (0xc001100b40) Create stream I0530 21:31:26.796969 6 log.go:172] (0xc0017b29a0) (0xc001100b40) Stream added, broadcasting: 1 I0530 21:31:26.799560 6 log.go:172] (0xc0017b29a0) Reply frame received for 1 I0530 21:31:26.799614 6 log.go:172] (0xc0017b29a0) (0xc00135dae0) Create stream I0530 21:31:26.799631 6 log.go:172] (0xc0017b29a0) (0xc00135dae0) Stream added, broadcasting: 3 I0530 21:31:26.800701 6 log.go:172] (0xc0017b29a0) Reply frame received for 3 I0530 21:31:26.800752 6 log.go:172] (0xc0017b29a0) (0xc0023063c0) Create stream I0530 21:31:26.800768 6 log.go:172] (0xc0017b29a0) (0xc0023063c0) Stream added, broadcasting: 5 I0530 21:31:26.802411 6 log.go:172] (0xc0017b29a0) Reply frame received for 5 I0530 21:31:26.861062 6 log.go:172] (0xc0017b29a0) Data frame received for 5 I0530 21:31:26.861096 6 log.go:172] (0xc0017b29a0) Data frame received for 3 I0530 21:31:26.861329 6 log.go:172] (0xc00135dae0) (3) Data frame handling I0530 21:31:26.861348 6 log.go:172] (0xc00135dae0) (3) Data frame sent I0530 21:31:26.861358 6 log.go:172] (0xc0017b29a0) Data frame received for 3 I0530 21:31:26.861379 6 log.go:172] (0xc00135dae0) (3) Data frame handling I0530 21:31:26.861453 6 log.go:172] (0xc0023063c0) (5) Data frame handling I0530 21:31:26.862738 6 log.go:172] (0xc0017b29a0) Data frame received for 1 I0530 21:31:26.862756 6 log.go:172] (0xc001100b40) (1) Data frame handling I0530 21:31:26.862765 6 log.go:172] (0xc001100b40) (1) Data frame sent I0530 21:31:26.862778 6 log.go:172] (0xc0017b29a0) (0xc001100b40) Stream removed, broadcasting: 1 I0530 21:31:26.862820 6 log.go:172] (0xc0017b29a0) Go away received I0530 21:31:26.862881 6 log.go:172] (0xc0017b29a0) (0xc001100b40) Stream removed, broadcasting: 1 I0530 21:31:26.862896 6 log.go:172] (0xc0017b29a0) (0xc00135dae0) Stream removed, broadcasting: 3 I0530 21:31:26.862904 6 log.go:172] (0xc0017b29a0) (0xc0023063c0) Stream removed, broadcasting: 5 May 30 21:31:26.862: INFO: Exec stderr: "" May 30 21:31:26.862: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8099 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:31:26.862: INFO: >>> kubeConfig: /root/.kube/config I0530 21:31:26.896237 6 log.go:172] (0xc001567550) (0xc002306640) Create stream I0530 21:31:26.896276 6 log.go:172] (0xc001567550) (0xc002306640) Stream added, broadcasting: 1 I0530 21:31:26.899281 6 log.go:172] (0xc001567550) Reply frame received for 1 I0530 21:31:26.899312 6 log.go:172] (0xc001567550) (0xc001100be0) Create stream I0530 21:31:26.899321 6 log.go:172] (0xc001567550) 
(0xc001100be0) Stream added, broadcasting: 3 I0530 21:31:26.900333 6 log.go:172] (0xc001567550) Reply frame received for 3 I0530 21:31:26.900374 6 log.go:172] (0xc001567550) (0xc002351900) Create stream I0530 21:31:26.900389 6 log.go:172] (0xc001567550) (0xc002351900) Stream added, broadcasting: 5 I0530 21:31:26.901696 6 log.go:172] (0xc001567550) Reply frame received for 5 I0530 21:31:26.973586 6 log.go:172] (0xc001567550) Data frame received for 5 I0530 21:31:26.973625 6 log.go:172] (0xc002351900) (5) Data frame handling I0530 21:31:26.973667 6 log.go:172] (0xc001567550) Data frame received for 3 I0530 21:31:26.973680 6 log.go:172] (0xc001100be0) (3) Data frame handling I0530 21:31:26.973689 6 log.go:172] (0xc001100be0) (3) Data frame sent I0530 21:31:26.973696 6 log.go:172] (0xc001567550) Data frame received for 3 I0530 21:31:26.973708 6 log.go:172] (0xc001100be0) (3) Data frame handling I0530 21:31:26.974681 6 log.go:172] (0xc001567550) Data frame received for 1 I0530 21:31:26.974705 6 log.go:172] (0xc002306640) (1) Data frame handling I0530 21:31:26.974725 6 log.go:172] (0xc002306640) (1) Data frame sent I0530 21:31:26.974743 6 log.go:172] (0xc001567550) (0xc002306640) Stream removed, broadcasting: 1 I0530 21:31:26.974765 6 log.go:172] (0xc001567550) Go away received I0530 21:31:26.974845 6 log.go:172] (0xc001567550) (0xc002306640) Stream removed, broadcasting: 1 I0530 21:31:26.974861 6 log.go:172] (0xc001567550) (0xc001100be0) Stream removed, broadcasting: 3 I0530 21:31:26.974869 6 log.go:172] (0xc001567550) (0xc002351900) Stream removed, broadcasting: 5 May 30 21:31:26.974: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 30 21:31:26.974: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8099 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:31:26.974: INFO: >>> kubeConfig: /root/.kube/config I0530 21:31:27.000960 6 log.go:172] (0xc00293c580) (0xc002351cc0) Create stream I0530 21:31:27.000986 6 log.go:172] (0xc00293c580) (0xc002351cc0) Stream added, broadcasting: 1 I0530 21:31:27.003707 6 log.go:172] (0xc00293c580) Reply frame received for 1 I0530 21:31:27.003759 6 log.go:172] (0xc00293c580) (0xc00135dc20) Create stream I0530 21:31:27.003773 6 log.go:172] (0xc00293c580) (0xc00135dc20) Stream added, broadcasting: 3 I0530 21:31:27.004776 6 log.go:172] (0xc00293c580) Reply frame received for 3 I0530 21:31:27.004824 6 log.go:172] (0xc00293c580) (0xc0023066e0) Create stream I0530 21:31:27.004841 6 log.go:172] (0xc00293c580) (0xc0023066e0) Stream added, broadcasting: 5 I0530 21:31:27.006116 6 log.go:172] (0xc00293c580) Reply frame received for 5 I0530 21:31:27.068563 6 log.go:172] (0xc00293c580) Data frame received for 3 I0530 21:31:27.068585 6 log.go:172] (0xc00135dc20) (3) Data frame handling I0530 21:31:27.068597 6 log.go:172] (0xc00135dc20) (3) Data frame sent I0530 21:31:27.073989 6 log.go:172] (0xc00293c580) Data frame received for 3 I0530 21:31:27.074017 6 log.go:172] (0xc00135dc20) (3) Data frame handling I0530 21:31:27.074038 6 log.go:172] (0xc00293c580) Data frame received for 5 I0530 21:31:27.074052 6 log.go:172] (0xc0023066e0) (5) Data frame handling I0530 21:31:27.075852 6 log.go:172] (0xc00293c580) Data frame received for 1 I0530 21:31:27.075894 6 log.go:172] (0xc002351cc0) (1) Data frame handling I0530 21:31:27.075917 6 log.go:172] (0xc002351cc0) (1) Data frame sent I0530 
21:31:27.075941 6 log.go:172] (0xc00293c580) (0xc002351cc0) Stream removed, broadcasting: 1 I0530 21:31:27.076069 6 log.go:172] (0xc00293c580) (0xc002351cc0) Stream removed, broadcasting: 1 I0530 21:31:27.076093 6 log.go:172] (0xc00293c580) (0xc00135dc20) Stream removed, broadcasting: 3 I0530 21:31:27.076113 6 log.go:172] (0xc00293c580) (0xc0023066e0) Stream removed, broadcasting: 5 May 30 21:31:27.076: INFO: Exec stderr: "" May 30 21:31:27.076: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8099 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:31:27.076: INFO: >>> kubeConfig: /root/.kube/config I0530 21:31:27.078695 6 log.go:172] (0xc00293c580) Go away received I0530 21:31:27.101726 6 log.go:172] (0xc002f5cf20) (0xc0027a21e0) Create stream I0530 21:31:27.101763 6 log.go:172] (0xc002f5cf20) (0xc0027a21e0) Stream added, broadcasting: 1 I0530 21:31:27.103776 6 log.go:172] (0xc002f5cf20) Reply frame received for 1 I0530 21:31:27.103816 6 log.go:172] (0xc002f5cf20) (0xc002351d60) Create stream I0530 21:31:27.103826 6 log.go:172] (0xc002f5cf20) (0xc002351d60) Stream added, broadcasting: 3 I0530 21:31:27.104451 6 log.go:172] (0xc002f5cf20) Reply frame received for 3 I0530 21:31:27.104475 6 log.go:172] (0xc002f5cf20) (0xc0027a2280) Create stream I0530 21:31:27.104484 6 log.go:172] (0xc002f5cf20) (0xc0027a2280) Stream added, broadcasting: 5 I0530 21:31:27.105288 6 log.go:172] (0xc002f5cf20) Reply frame received for 5 I0530 21:31:27.159134 6 log.go:172] (0xc002f5cf20) Data frame received for 5 I0530 21:31:27.159183 6 log.go:172] (0xc002f5cf20) Data frame received for 3 I0530 21:31:27.159243 6 log.go:172] (0xc002351d60) (3) Data frame handling I0530 21:31:27.159283 6 log.go:172] (0xc002351d60) (3) Data frame sent I0530 21:31:27.159305 6 log.go:172] (0xc002f5cf20) Data frame received for 3 I0530 21:31:27.159316 6 log.go:172] (0xc002351d60) (3) Data frame handling I0530 21:31:27.159337 6 log.go:172] (0xc0027a2280) (5) Data frame handling I0530 21:31:27.160807 6 log.go:172] (0xc002f5cf20) Data frame received for 1 I0530 21:31:27.160834 6 log.go:172] (0xc0027a21e0) (1) Data frame handling I0530 21:31:27.160858 6 log.go:172] (0xc0027a21e0) (1) Data frame sent I0530 21:31:27.160876 6 log.go:172] (0xc002f5cf20) (0xc0027a21e0) Stream removed, broadcasting: 1 I0530 21:31:27.160993 6 log.go:172] (0xc002f5cf20) (0xc0027a21e0) Stream removed, broadcasting: 1 I0530 21:31:27.161022 6 log.go:172] (0xc002f5cf20) (0xc002351d60) Stream removed, broadcasting: 3 I0530 21:31:27.161322 6 log.go:172] (0xc002f5cf20) (0xc0027a2280) Stream removed, broadcasting: 5 May 30 21:31:27.161: INFO: Exec stderr: "" May 30 21:31:27.161: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8099 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:31:27.161: INFO: >>> kubeConfig: /root/.kube/config I0530 21:31:27.161724 6 log.go:172] (0xc002f5cf20) Go away received I0530 21:31:27.197767 6 log.go:172] (0xc002f5d550) (0xc0027a2500) Create stream I0530 21:31:27.197804 6 log.go:172] (0xc002f5d550) (0xc0027a2500) Stream added, broadcasting: 1 I0530 21:31:27.200323 6 log.go:172] (0xc002f5d550) Reply frame received for 1 I0530 21:31:27.200370 6 log.go:172] (0xc002f5d550) (0xc0027a25a0) Create stream I0530 21:31:27.200386 6 log.go:172] (0xc002f5d550) (0xc0027a25a0) Stream added, broadcasting: 3 I0530 
21:31:27.201801 6 log.go:172] (0xc002f5d550) Reply frame received for 3 I0530 21:31:27.201835 6 log.go:172] (0xc002f5d550) (0xc001ce6460) Create stream I0530 21:31:27.201847 6 log.go:172] (0xc002f5d550) (0xc001ce6460) Stream added, broadcasting: 5 I0530 21:31:27.202653 6 log.go:172] (0xc002f5d550) Reply frame received for 5 I0530 21:31:27.267576 6 log.go:172] (0xc002f5d550) Data frame received for 5 I0530 21:31:27.267611 6 log.go:172] (0xc001ce6460) (5) Data frame handling I0530 21:31:27.267632 6 log.go:172] (0xc002f5d550) Data frame received for 3 I0530 21:31:27.267645 6 log.go:172] (0xc0027a25a0) (3) Data frame handling I0530 21:31:27.267669 6 log.go:172] (0xc0027a25a0) (3) Data frame sent I0530 21:31:27.267676 6 log.go:172] (0xc002f5d550) Data frame received for 3 I0530 21:31:27.267681 6 log.go:172] (0xc0027a25a0) (3) Data frame handling I0530 21:31:27.268762 6 log.go:172] (0xc002f5d550) Data frame received for 1 I0530 21:31:27.268776 6 log.go:172] (0xc0027a2500) (1) Data frame handling I0530 21:31:27.268785 6 log.go:172] (0xc0027a2500) (1) Data frame sent I0530 21:31:27.268798 6 log.go:172] (0xc002f5d550) (0xc0027a2500) Stream removed, broadcasting: 1 I0530 21:31:27.268810 6 log.go:172] (0xc002f5d550) Go away received I0530 21:31:27.268932 6 log.go:172] (0xc002f5d550) (0xc0027a2500) Stream removed, broadcasting: 1 I0530 21:31:27.268947 6 log.go:172] (0xc002f5d550) (0xc0027a25a0) Stream removed, broadcasting: 3 I0530 21:31:27.268954 6 log.go:172] (0xc002f5d550) (0xc001ce6460) Stream removed, broadcasting: 5 May 30 21:31:27.268: INFO: Exec stderr: "" May 30 21:31:27.268: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8099 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:31:27.269: INFO: >>> kubeConfig: /root/.kube/config I0530 21:31:27.301833 6 log.go:172] (0xc0017b2fd0) (0xc0011014a0) Create stream I0530 21:31:27.301868 6 log.go:172] (0xc0017b2fd0) (0xc0011014a0) Stream added, broadcasting: 1 I0530 21:31:27.305714 6 log.go:172] (0xc0017b2fd0) Reply frame received for 1 I0530 21:31:27.305779 6 log.go:172] (0xc0017b2fd0) (0xc002306780) Create stream I0530 21:31:27.305806 6 log.go:172] (0xc0017b2fd0) (0xc002306780) Stream added, broadcasting: 3 I0530 21:31:27.306934 6 log.go:172] (0xc0017b2fd0) Reply frame received for 3 I0530 21:31:27.307004 6 log.go:172] (0xc0017b2fd0) (0xc0027a2640) Create stream I0530 21:31:27.307021 6 log.go:172] (0xc0017b2fd0) (0xc0027a2640) Stream added, broadcasting: 5 I0530 21:31:27.308045 6 log.go:172] (0xc0017b2fd0) Reply frame received for 5 I0530 21:31:27.425102 6 log.go:172] (0xc0017b2fd0) Data frame received for 3 I0530 21:31:27.425313 6 log.go:172] (0xc002306780) (3) Data frame handling I0530 21:31:27.425325 6 log.go:172] (0xc002306780) (3) Data frame sent I0530 21:31:27.425333 6 log.go:172] (0xc0017b2fd0) Data frame received for 3 I0530 21:31:27.425341 6 log.go:172] (0xc002306780) (3) Data frame handling I0530 21:31:27.425391 6 log.go:172] (0xc0017b2fd0) Data frame received for 5 I0530 21:31:27.425419 6 log.go:172] (0xc0027a2640) (5) Data frame handling I0530 21:31:27.426422 6 log.go:172] (0xc0017b2fd0) Data frame received for 1 I0530 21:31:27.426468 6 log.go:172] (0xc0011014a0) (1) Data frame handling I0530 21:31:27.426508 6 log.go:172] (0xc0011014a0) (1) Data frame sent I0530 21:31:27.426526 6 log.go:172] (0xc0017b2fd0) (0xc0011014a0) Stream removed, broadcasting: 1 I0530 21:31:27.426544 6 log.go:172] (0xc0017b2fd0) Go away 
received I0530 21:31:27.426651 6 log.go:172] (0xc0017b2fd0) (0xc0011014a0) Stream removed, broadcasting: 1 I0530 21:31:27.426679 6 log.go:172] (0xc0017b2fd0) (0xc002306780) Stream removed, broadcasting: 3 I0530 21:31:27.426696 6 log.go:172] (0xc0017b2fd0) (0xc0027a2640) Stream removed, broadcasting: 5 May 30 21:31:27.426: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:31:27.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8099" for this suite. • [SLOW TEST:11.211 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":748,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:31:27.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-jdkw STEP: Creating a pod to test atomic-volume-subpath May 30 21:31:27.547: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jdkw" in namespace "subpath-515" to be "success or failure" May 30 21:31:27.551: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.465986ms May 30 21:31:29.954: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.406139796s May 30 21:31:31.958: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Running", Reason="", readiness=true. Elapsed: 4.410577377s May 30 21:31:33.963: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Running", Reason="", readiness=true. Elapsed: 6.415358116s May 30 21:31:35.967: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Running", Reason="", readiness=true. Elapsed: 8.419724394s May 30 21:31:37.970: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Running", Reason="", readiness=true. Elapsed: 10.422813848s May 30 21:31:39.974: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Running", Reason="", readiness=true. Elapsed: 12.426970671s May 30 21:31:41.979: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.431591846s May 30 21:31:43.983: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Running", Reason="", readiness=true. Elapsed: 16.435197355s May 30 21:31:45.987: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Running", Reason="", readiness=true. Elapsed: 18.439630419s May 30 21:31:47.991: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Running", Reason="", readiness=true. Elapsed: 20.444057519s May 30 21:31:49.996: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Running", Reason="", readiness=true. Elapsed: 22.448357042s May 30 21:31:52.001: INFO: Pod "pod-subpath-test-configmap-jdkw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.453199646s STEP: Saw pod success May 30 21:31:52.001: INFO: Pod "pod-subpath-test-configmap-jdkw" satisfied condition "success or failure" May 30 21:31:52.004: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-jdkw container test-container-subpath-configmap-jdkw: STEP: delete the pod May 30 21:31:52.025: INFO: Waiting for pod pod-subpath-test-configmap-jdkw to disappear May 30 21:31:52.035: INFO: Pod pod-subpath-test-configmap-jdkw no longer exists STEP: Deleting pod pod-subpath-test-configmap-jdkw May 30 21:31:52.035: INFO: Deleting pod "pod-subpath-test-configmap-jdkw" in namespace "subpath-515" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:31:52.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-515" for this suite. • [SLOW TEST:24.614 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":48,"skipped":784,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:31:52.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 30 21:31:52.103: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 30 21:31:52.274: INFO: Waiting for terminating namespaces to be deleted... 
May 30 21:31:52.276: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 30 21:31:52.281: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-8099 started at 2020-05-30 21:31:22 +0000 UTC (2 container statuses recorded) May 30 21:31:52.281: INFO: Container busybox-1 ready: true, restart count 0 May 30 21:31:52.281: INFO: Container busybox-2 ready: true, restart count 0 May 30 21:31:52.281: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 30 21:31:52.281: INFO: Container kindnet-cni ready: true, restart count 2 May 30 21:31:52.281: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 30 21:31:52.281: INFO: Container kube-proxy ready: true, restart count 0 May 30 21:31:52.281: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 30 21:31:52.287: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 30 21:31:52.287: INFO: Container kindnet-cni ready: true, restart count 2 May 30 21:31:52.287: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 30 21:31:52.287: INFO: Container kube-bench ready: false, restart count 0 May 30 21:31:52.288: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 30 21:31:52.288: INFO: Container kube-proxy ready: true, restart count 0 May 30 21:31:52.288: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 30 21:31:52.288: INFO: Container kube-hunter ready: false, restart count 0 May 30 21:31:52.288: INFO: test-pod from e2e-kubelet-etc-hosts-8099 started at 2020-05-30 21:31:16 +0000 UTC (3 container statuses recorded) May 30 21:31:52.288: INFO: Container busybox-1 ready: true, restart count 0 May 30 21:31:52.288: INFO: Container busybox-2 ready: true, restart count 0 May 30 21:31:52.288: INFO: Container busybox-3 ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7c510ffa-b4ab-4118-ac80-78ec7b3e31a7 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-7c510ffa-b4ab-4118-ac80-78ec7b3e31a7 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7c510ffa-b4ab-4118-ac80-78ec7b3e31a7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:32:00.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2209" for this suite.
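Replayed by hand, the STEP sequence above amounts to labelling the node the scheduler picked and relaunching a pod with a matching selector (label key, value, and node name taken from the log; pod details illustrative):

kubectl label node jerma-worker2 \
  kubernetes.io/e2e-7c510ffa-b4ab-4118-ac80-78ec7b3e31a7=42      # the random label the test applied
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: with-labels                      # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-7c510ffa-b4ab-4118-ac80-78ec7b3e31a7: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1          # assumed image
EOF
kubectl label node jerma-worker2 \
  kubernetes.io/e2e-7c510ffa-b4ab-4118-ac80-78ec7b3e31a7-        # trailing dash removes the label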
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.509 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":49,"skipped":787,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:32:00.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7243 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7243 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7243 May 30 21:32:00.623: INFO: Found 0 stateful pods, waiting for 1 May 30 21:32:10.627: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 30 21:32:10.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7243 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 21:32:13.596: INFO: stderr: "I0530 21:32:13.451242 1556 log.go:172] (0xc000777080) (0xc0006d3ea0) Create stream\nI0530 21:32:13.451272 1556 log.go:172] (0xc000777080) (0xc0006d3ea0) Stream added, broadcasting: 1\nI0530 21:32:13.453427 1556 log.go:172] (0xc000777080) Reply frame received for 1\nI0530 21:32:13.453463 1556 log.go:172] (0xc000777080) (0xc000b8c000) Create stream\nI0530 21:32:13.453473 1556 log.go:172] (0xc000777080) (0xc000b8c000) Stream added, broadcasting: 3\nI0530 21:32:13.454586 1556 log.go:172] (0xc000777080) Reply frame received for 3\nI0530 21:32:13.454636 1556 log.go:172] (0xc000777080) (0xc0007fe000) Create stream\nI0530 21:32:13.454651 1556 log.go:172] (0xc000777080) (0xc0007fe000) Stream added, broadcasting: 5\nI0530 21:32:13.455490 1556 log.go:172] (0xc000777080) Reply frame received for 5\nI0530 21:32:13.555951 1556 log.go:172] (0xc000777080) Data frame 
received for 5\nI0530 21:32:13.555981 1556 log.go:172] (0xc0007fe000) (5) Data frame handling\nI0530 21:32:13.556001 1556 log.go:172] (0xc0007fe000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 21:32:13.584141 1556 log.go:172] (0xc000777080) Data frame received for 3\nI0530 21:32:13.584185 1556 log.go:172] (0xc000b8c000) (3) Data frame handling\nI0530 21:32:13.584209 1556 log.go:172] (0xc000b8c000) (3) Data frame sent\nI0530 21:32:13.584223 1556 log.go:172] (0xc000777080) Data frame received for 3\nI0530 21:32:13.584236 1556 log.go:172] (0xc000b8c000) (3) Data frame handling\nI0530 21:32:13.584395 1556 log.go:172] (0xc000777080) Data frame received for 5\nI0530 21:32:13.584422 1556 log.go:172] (0xc0007fe000) (5) Data frame handling\nI0530 21:32:13.587070 1556 log.go:172] (0xc000777080) Data frame received for 1\nI0530 21:32:13.587099 1556 log.go:172] (0xc0006d3ea0) (1) Data frame handling\nI0530 21:32:13.587219 1556 log.go:172] (0xc0006d3ea0) (1) Data frame sent\nI0530 21:32:13.587246 1556 log.go:172] (0xc000777080) (0xc0006d3ea0) Stream removed, broadcasting: 1\nI0530 21:32:13.587270 1556 log.go:172] (0xc000777080) Go away received\nI0530 21:32:13.587741 1556 log.go:172] (0xc000777080) (0xc0006d3ea0) Stream removed, broadcasting: 1\nI0530 21:32:13.587764 1556 log.go:172] (0xc000777080) (0xc000b8c000) Stream removed, broadcasting: 3\nI0530 21:32:13.587777 1556 log.go:172] (0xc000777080) (0xc0007fe000) Stream removed, broadcasting: 5\n" May 30 21:32:13.597: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 21:32:13.597: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 21:32:13.600: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 30 21:32:23.605: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 30 21:32:23.605: INFO: Waiting for statefulset status.replicas updated to 0 May 30 21:32:23.624: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999399s May 30 21:32:24.628: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990502514s May 30 21:32:25.636: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.985578039s May 30 21:32:26.641: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.978050626s May 30 21:32:27.644: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.973283426s May 30 21:32:28.651: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.970143287s May 30 21:32:29.655: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.962970833s May 30 21:32:30.660: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.958755708s May 30 21:32:31.665: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.954158598s May 30 21:32:32.669: INFO: Verifying statefulset ss doesn't scale past 1 for another 948.698265ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7243 May 30 21:32:33.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7243 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 21:32:33.907: INFO: stderr: "I0530 21:32:33.812337 1591 log.go:172] (0xc00076aa50) (0xc0008e4000) Create stream\nI0530 21:32:33.812426 1591 log.go:172] (0xc00076aa50) 
(0xc0008e4000) Stream added, broadcasting: 1\nI0530 21:32:33.815432 1591 log.go:172] (0xc00076aa50) Reply frame received for 1\nI0530 21:32:33.815473 1591 log.go:172] (0xc00076aa50) (0xc000a92000) Create stream\nI0530 21:32:33.815485 1591 log.go:172] (0xc00076aa50) (0xc000a92000) Stream added, broadcasting: 3\nI0530 21:32:33.816360 1591 log.go:172] (0xc00076aa50) Reply frame received for 3\nI0530 21:32:33.816399 1591 log.go:172] (0xc00076aa50) (0xc0008e40a0) Create stream\nI0530 21:32:33.816414 1591 log.go:172] (0xc00076aa50) (0xc0008e40a0) Stream added, broadcasting: 5\nI0530 21:32:33.817369 1591 log.go:172] (0xc00076aa50) Reply frame received for 5\nI0530 21:32:33.899472 1591 log.go:172] (0xc00076aa50) Data frame received for 3\nI0530 21:32:33.899511 1591 log.go:172] (0xc000a92000) (3) Data frame handling\nI0530 21:32:33.899522 1591 log.go:172] (0xc000a92000) (3) Data frame sent\nI0530 21:32:33.899528 1591 log.go:172] (0xc00076aa50) Data frame received for 3\nI0530 21:32:33.899535 1591 log.go:172] (0xc000a92000) (3) Data frame handling\nI0530 21:32:33.899562 1591 log.go:172] (0xc00076aa50) Data frame received for 5\nI0530 21:32:33.899575 1591 log.go:172] (0xc0008e40a0) (5) Data frame handling\nI0530 21:32:33.899589 1591 log.go:172] (0xc0008e40a0) (5) Data frame sent\nI0530 21:32:33.899599 1591 log.go:172] (0xc00076aa50) Data frame received for 5\nI0530 21:32:33.899606 1591 log.go:172] (0xc0008e40a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 21:32:33.901457 1591 log.go:172] (0xc00076aa50) Data frame received for 1\nI0530 21:32:33.901492 1591 log.go:172] (0xc0008e4000) (1) Data frame handling\nI0530 21:32:33.901517 1591 log.go:172] (0xc0008e4000) (1) Data frame sent\nI0530 21:32:33.901542 1591 log.go:172] (0xc00076aa50) (0xc0008e4000) Stream removed, broadcasting: 1\nI0530 21:32:33.901567 1591 log.go:172] (0xc00076aa50) Go away received\nI0530 21:32:33.901892 1591 log.go:172] (0xc00076aa50) (0xc0008e4000) Stream removed, broadcasting: 1\nI0530 21:32:33.901906 1591 log.go:172] (0xc00076aa50) (0xc000a92000) Stream removed, broadcasting: 3\nI0530 21:32:33.901914 1591 log.go:172] (0xc00076aa50) (0xc0008e40a0) Stream removed, broadcasting: 5\n" May 30 21:32:33.907: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 21:32:33.907: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 21:32:33.911: INFO: Found 1 stateful pods, waiting for 3 May 30 21:32:43.916: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 30 21:32:43.916: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 30 21:32:43.916: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 30 21:32:43.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7243 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 21:32:44.157: INFO: stderr: "I0530 21:32:44.061017 1612 log.go:172] (0xc0009880b0) (0xc000491540) Create stream\nI0530 21:32:44.061078 1612 log.go:172] (0xc0009880b0) (0xc000491540) Stream added, broadcasting: 1\nI0530 21:32:44.063447 1612 log.go:172] (0xc0009880b0) Reply frame received for 1\nI0530 21:32:44.063484 1612 log.go:172] (0xc0009880b0) 
(0xc00071fae0) Create stream\nI0530 21:32:44.063495 1612 log.go:172] (0xc0009880b0) (0xc00071fae0) Stream added, broadcasting: 3\nI0530 21:32:44.064427 1612 log.go:172] (0xc0009880b0) Reply frame received for 3\nI0530 21:32:44.064455 1612 log.go:172] (0xc0009880b0) (0xc00071fcc0) Create stream\nI0530 21:32:44.064462 1612 log.go:172] (0xc0009880b0) (0xc00071fcc0) Stream added, broadcasting: 5\nI0530 21:32:44.065388 1612 log.go:172] (0xc0009880b0) Reply frame received for 5\nI0530 21:32:44.149590 1612 log.go:172] (0xc0009880b0) Data frame received for 3\nI0530 21:32:44.149633 1612 log.go:172] (0xc0009880b0) Data frame received for 5\nI0530 21:32:44.149685 1612 log.go:172] (0xc00071fcc0) (5) Data frame handling\nI0530 21:32:44.149713 1612 log.go:172] (0xc00071fcc0) (5) Data frame sent\nI0530 21:32:44.149734 1612 log.go:172] (0xc0009880b0) Data frame received for 5\nI0530 21:32:44.149752 1612 log.go:172] (0xc00071fcc0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 21:32:44.149772 1612 log.go:172] (0xc00071fae0) (3) Data frame handling\nI0530 21:32:44.149861 1612 log.go:172] (0xc00071fae0) (3) Data frame sent\nI0530 21:32:44.149888 1612 log.go:172] (0xc0009880b0) Data frame received for 3\nI0530 21:32:44.149903 1612 log.go:172] (0xc00071fae0) (3) Data frame handling\nI0530 21:32:44.150983 1612 log.go:172] (0xc0009880b0) Data frame received for 1\nI0530 21:32:44.151011 1612 log.go:172] (0xc000491540) (1) Data frame handling\nI0530 21:32:44.151030 1612 log.go:172] (0xc000491540) (1) Data frame sent\nI0530 21:32:44.151054 1612 log.go:172] (0xc0009880b0) (0xc000491540) Stream removed, broadcasting: 1\nI0530 21:32:44.151412 1612 log.go:172] (0xc0009880b0) (0xc000491540) Stream removed, broadcasting: 1\nI0530 21:32:44.151431 1612 log.go:172] (0xc0009880b0) (0xc00071fae0) Stream removed, broadcasting: 3\nI0530 21:32:44.151439 1612 log.go:172] (0xc0009880b0) (0xc00071fcc0) Stream removed, broadcasting: 5\n" May 30 21:32:44.157: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 21:32:44.157: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 21:32:44.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7243 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 21:32:44.444: INFO: stderr: "I0530 21:32:44.286445 1633 log.go:172] (0xc000114370) (0xc0006efa40) Create stream\nI0530 21:32:44.286512 1633 log.go:172] (0xc000114370) (0xc0006efa40) Stream added, broadcasting: 1\nI0530 21:32:44.289047 1633 log.go:172] (0xc000114370) Reply frame received for 1\nI0530 21:32:44.289082 1633 log.go:172] (0xc000114370) (0xc0009b8000) Create stream\nI0530 21:32:44.289096 1633 log.go:172] (0xc000114370) (0xc0009b8000) Stream added, broadcasting: 3\nI0530 21:32:44.290156 1633 log.go:172] (0xc000114370) Reply frame received for 3\nI0530 21:32:44.290213 1633 log.go:172] (0xc000114370) (0xc000516000) Create stream\nI0530 21:32:44.290230 1633 log.go:172] (0xc000114370) (0xc000516000) Stream added, broadcasting: 5\nI0530 21:32:44.290872 1633 log.go:172] (0xc000114370) Reply frame received for 5\nI0530 21:32:44.338869 1633 log.go:172] (0xc000114370) Data frame received for 5\nI0530 21:32:44.338891 1633 log.go:172] (0xc000516000) (5) Data frame handling\nI0530 21:32:44.338905 1633 log.go:172] (0xc000516000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0530 21:32:44.435088 1633 log.go:172] (0xc000114370) Data frame received for 3\nI0530 21:32:44.435267 1633 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0530 21:32:44.435347 1633 log.go:172] (0xc0009b8000) (3) Data frame sent\nI0530 21:32:44.435419 1633 log.go:172] (0xc000114370) Data frame received for 3\nI0530 21:32:44.435452 1633 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0530 21:32:44.435481 1633 log.go:172] (0xc000114370) Data frame received for 5\nI0530 21:32:44.435511 1633 log.go:172] (0xc000516000) (5) Data frame handling\nI0530 21:32:44.437062 1633 log.go:172] (0xc000114370) Data frame received for 1\nI0530 21:32:44.437089 1633 log.go:172] (0xc0006efa40) (1) Data frame handling\nI0530 21:32:44.437328 1633 log.go:172] (0xc0006efa40) (1) Data frame sent\nI0530 21:32:44.437370 1633 log.go:172] (0xc000114370) (0xc0006efa40) Stream removed, broadcasting: 1\nI0530 21:32:44.437452 1633 log.go:172] (0xc000114370) Go away received\nI0530 21:32:44.437798 1633 log.go:172] (0xc000114370) (0xc0006efa40) Stream removed, broadcasting: 1\nI0530 21:32:44.437828 1633 log.go:172] (0xc000114370) (0xc0009b8000) Stream removed, broadcasting: 3\nI0530 21:32:44.437843 1633 log.go:172] (0xc000114370) (0xc000516000) Stream removed, broadcasting: 5\n" May 30 21:32:44.444: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 21:32:44.444: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 21:32:44.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7243 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 21:32:44.671: INFO: stderr: "I0530 21:32:44.569782 1654 log.go:172] (0xc000a48000) (0xc00070f360) Create stream\nI0530 21:32:44.569831 1654 log.go:172] (0xc000a48000) (0xc00070f360) Stream added, broadcasting: 1\nI0530 21:32:44.572184 1654 log.go:172] (0xc000a48000) Reply frame received for 1\nI0530 21:32:44.572220 1654 log.go:172] (0xc000a48000) (0xc000986000) Create stream\nI0530 21:32:44.572227 1654 log.go:172] (0xc000a48000) (0xc000986000) Stream added, broadcasting: 3\nI0530 21:32:44.573958 1654 log.go:172] (0xc000a48000) Reply frame received for 3\nI0530 21:32:44.574006 1654 log.go:172] (0xc000a48000) (0xc0006f7900) Create stream\nI0530 21:32:44.574028 1654 log.go:172] (0xc000a48000) (0xc0006f7900) Stream added, broadcasting: 5\nI0530 21:32:44.574909 1654 log.go:172] (0xc000a48000) Reply frame received for 5\nI0530 21:32:44.632147 1654 log.go:172] (0xc000a48000) Data frame received for 5\nI0530 21:32:44.632198 1654 log.go:172] (0xc0006f7900) (5) Data frame handling\nI0530 21:32:44.632238 1654 log.go:172] (0xc0006f7900) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 21:32:44.663322 1654 log.go:172] (0xc000a48000) Data frame received for 3\nI0530 21:32:44.663366 1654 log.go:172] (0xc000986000) (3) Data frame handling\nI0530 21:32:44.663397 1654 log.go:172] (0xc000986000) (3) Data frame sent\nI0530 21:32:44.663678 1654 log.go:172] (0xc000a48000) Data frame received for 3\nI0530 21:32:44.663712 1654 log.go:172] (0xc000986000) (3) Data frame handling\nI0530 21:32:44.663802 1654 log.go:172] (0xc000a48000) Data frame received for 5\nI0530 21:32:44.663831 1654 log.go:172] (0xc0006f7900) (5) Data frame handling\nI0530 21:32:44.666235 1654 log.go:172] (0xc000a48000) Data frame received for 1\nI0530 21:32:44.666267 1654 log.go:172] 
(0xc00070f360) (1) Data frame handling\nI0530 21:32:44.666293 1654 log.go:172] (0xc00070f360) (1) Data frame sent\nI0530 21:32:44.666314 1654 log.go:172] (0xc000a48000) (0xc00070f360) Stream removed, broadcasting: 1\nI0530 21:32:44.666333 1654 log.go:172] (0xc000a48000) Go away received\nI0530 21:32:44.666808 1654 log.go:172] (0xc000a48000) (0xc00070f360) Stream removed, broadcasting: 1\nI0530 21:32:44.666858 1654 log.go:172] (0xc000a48000) (0xc000986000) Stream removed, broadcasting: 3\nI0530 21:32:44.666883 1654 log.go:172] (0xc000a48000) (0xc0006f7900) Stream removed, broadcasting: 5\n" May 30 21:32:44.671: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 21:32:44.671: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 21:32:44.671: INFO: Waiting for statefulset status.replicas updated to 0 May 30 21:32:44.678: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 30 21:32:54.721: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 30 21:32:54.721: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 30 21:32:54.721: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 30 21:32:54.778: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999429s May 30 21:32:55.793: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.95026754s May 30 21:32:56.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.935036637s May 30 21:32:57.818: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.928985893s May 30 21:32:58.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.910342979s May 30 21:32:59.827: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.90533191s May 30 21:33:00.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.901255056s May 30 21:33:01.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.895834815s May 30 21:33:02.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.892002415s May 30 21:33:03.859: INFO: Verifying statefulset ss doesn't scale past 3 for another 887.13763ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7243 May 30 21:33:04.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7243 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 21:33:05.107: INFO: stderr: "I0530 21:33:05.007290 1676 log.go:172] (0xc0000f4bb0) (0xc0006cfc20) Create stream\nI0530 21:33:05.007346 1676 log.go:172] (0xc0000f4bb0) (0xc0006cfc20) Stream added, broadcasting: 1\nI0530 21:33:05.009686 1676 log.go:172] (0xc0000f4bb0) Reply frame received for 1\nI0530 21:33:05.009742 1676 log.go:172] (0xc0000f4bb0) (0xc0006cfe00) Create stream\nI0530 21:33:05.009757 1676 log.go:172] (0xc0000f4bb0) (0xc0006cfe00) Stream added, broadcasting: 3\nI0530 21:33:05.010749 1676 log.go:172] (0xc0000f4bb0) Reply frame received for 3\nI0530 21:33:05.010789 1676 log.go:172] (0xc0000f4bb0) (0xc0007620a0) Create stream\nI0530 21:33:05.010803 1676 log.go:172] (0xc0000f4bb0) (0xc0007620a0) Stream added, broadcasting: 5\nI0530 21:33:05.011855 1676 log.go:172] (0xc0000f4bb0) Reply frame received for 5\nI0530 21:33:05.098703 1676 log.go:172] 
(0xc0000f4bb0) Data frame received for 5\nI0530 21:33:05.098728 1676 log.go:172] (0xc0007620a0) (5) Data frame handling\nI0530 21:33:05.098738 1676 log.go:172] (0xc0007620a0) (5) Data frame sent\nI0530 21:33:05.098745 1676 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0530 21:33:05.098751 1676 log.go:172] (0xc0007620a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 21:33:05.098809 1676 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0530 21:33:05.098857 1676 log.go:172] (0xc0006cfe00) (3) Data frame handling\nI0530 21:33:05.098898 1676 log.go:172] (0xc0006cfe00) (3) Data frame sent\nI0530 21:33:05.098921 1676 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0530 21:33:05.098938 1676 log.go:172] (0xc0006cfe00) (3) Data frame handling\nI0530 21:33:05.100563 1676 log.go:172] (0xc0000f4bb0) Data frame received for 1\nI0530 21:33:05.100586 1676 log.go:172] (0xc0006cfc20) (1) Data frame handling\nI0530 21:33:05.100604 1676 log.go:172] (0xc0006cfc20) (1) Data frame sent\nI0530 21:33:05.100624 1676 log.go:172] (0xc0000f4bb0) (0xc0006cfc20) Stream removed, broadcasting: 1\nI0530 21:33:05.100670 1676 log.go:172] (0xc0000f4bb0) Go away received\nI0530 21:33:05.101105 1676 log.go:172] (0xc0000f4bb0) (0xc0006cfc20) Stream removed, broadcasting: 1\nI0530 21:33:05.101317 1676 log.go:172] (0xc0000f4bb0) (0xc0006cfe00) Stream removed, broadcasting: 3\nI0530 21:33:05.101329 1676 log.go:172] (0xc0000f4bb0) (0xc0007620a0) Stream removed, broadcasting: 5\n" May 30 21:33:05.107: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 21:33:05.107: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 21:33:05.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7243 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 21:33:05.307: INFO: stderr: "I0530 21:33:05.234945 1697 log.go:172] (0xc000a3c630) (0xc000aae140) Create stream\nI0530 21:33:05.235016 1697 log.go:172] (0xc000a3c630) (0xc000aae140) Stream added, broadcasting: 1\nI0530 21:33:05.237802 1697 log.go:172] (0xc000a3c630) Reply frame received for 1\nI0530 21:33:05.237847 1697 log.go:172] (0xc000a3c630) (0xc000aae1e0) Create stream\nI0530 21:33:05.237857 1697 log.go:172] (0xc000a3c630) (0xc000aae1e0) Stream added, broadcasting: 3\nI0530 21:33:05.238788 1697 log.go:172] (0xc000a3c630) Reply frame received for 3\nI0530 21:33:05.238827 1697 log.go:172] (0xc000a3c630) (0xc00023f540) Create stream\nI0530 21:33:05.238844 1697 log.go:172] (0xc000a3c630) (0xc00023f540) Stream added, broadcasting: 5\nI0530 21:33:05.239626 1697 log.go:172] (0xc000a3c630) Reply frame received for 5\nI0530 21:33:05.297603 1697 log.go:172] (0xc000a3c630) Data frame received for 3\nI0530 21:33:05.297635 1697 log.go:172] (0xc000aae1e0) (3) Data frame handling\nI0530 21:33:05.297655 1697 log.go:172] (0xc000aae1e0) (3) Data frame sent\nI0530 21:33:05.297668 1697 log.go:172] (0xc000a3c630) Data frame received for 3\nI0530 21:33:05.297688 1697 log.go:172] (0xc000aae1e0) (3) Data frame handling\nI0530 21:33:05.298137 1697 log.go:172] (0xc000a3c630) Data frame received for 5\nI0530 21:33:05.298154 1697 log.go:172] (0xc00023f540) (5) Data frame handling\nI0530 21:33:05.298173 1697 log.go:172] (0xc00023f540) (5) Data frame sent\nI0530 21:33:05.298184 1697 log.go:172] (0xc000a3c630) Data frame received for 5\nI0530 21:33:05.298196 
1697 log.go:172] (0xc00023f540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 21:33:05.299701 1697 log.go:172] (0xc000a3c630) Data frame received for 1\nI0530 21:33:05.299720 1697 log.go:172] (0xc000aae140) (1) Data frame handling\nI0530 21:33:05.299732 1697 log.go:172] (0xc000aae140) (1) Data frame sent\nI0530 21:33:05.299749 1697 log.go:172] (0xc000a3c630) (0xc000aae140) Stream removed, broadcasting: 1\nI0530 21:33:05.299989 1697 log.go:172] (0xc000a3c630) (0xc000aae140) Stream removed, broadcasting: 1\nI0530 21:33:05.300003 1697 log.go:172] (0xc000a3c630) (0xc000aae1e0) Stream removed, broadcasting: 3\nI0530 21:33:05.300047 1697 log.go:172] (0xc000a3c630) Go away received\nI0530 21:33:05.300103 1697 log.go:172] (0xc000a3c630) (0xc00023f540) Stream removed, broadcasting: 5\n" May 30 21:33:05.307: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 21:33:05.307: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 21:33:05.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7243 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 21:33:05.510: INFO: stderr: "I0530 21:33:05.437517 1719 log.go:172] (0xc0009aa630) (0xc000a36000) Create stream\nI0530 21:33:05.437569 1719 log.go:172] (0xc0009aa630) (0xc000a36000) Stream added, broadcasting: 1\nI0530 21:33:05.439862 1719 log.go:172] (0xc0009aa630) Reply frame received for 1\nI0530 21:33:05.439909 1719 log.go:172] (0xc0009aa630) (0xc000a92000) Create stream\nI0530 21:33:05.439923 1719 log.go:172] (0xc0009aa630) (0xc000a92000) Stream added, broadcasting: 3\nI0530 21:33:05.440809 1719 log.go:172] (0xc0009aa630) Reply frame received for 3\nI0530 21:33:05.440844 1719 log.go:172] (0xc0009aa630) (0xc000a360a0) Create stream\nI0530 21:33:05.440859 1719 log.go:172] (0xc0009aa630) (0xc000a360a0) Stream added, broadcasting: 5\nI0530 21:33:05.441975 1719 log.go:172] (0xc0009aa630) Reply frame received for 5\nI0530 21:33:05.501966 1719 log.go:172] (0xc0009aa630) Data frame received for 5\nI0530 21:33:05.501991 1719 log.go:172] (0xc000a360a0) (5) Data frame handling\nI0530 21:33:05.502005 1719 log.go:172] (0xc000a360a0) (5) Data frame sent\nI0530 21:33:05.502014 1719 log.go:172] (0xc0009aa630) Data frame received for 5\nI0530 21:33:05.502020 1719 log.go:172] (0xc000a360a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 21:33:05.502064 1719 log.go:172] (0xc0009aa630) Data frame received for 3\nI0530 21:33:05.502094 1719 log.go:172] (0xc000a92000) (3) Data frame handling\nI0530 21:33:05.502127 1719 log.go:172] (0xc000a92000) (3) Data frame sent\nI0530 21:33:05.502140 1719 log.go:172] (0xc0009aa630) Data frame received for 3\nI0530 21:33:05.502149 1719 log.go:172] (0xc000a92000) (3) Data frame handling\nI0530 21:33:05.503539 1719 log.go:172] (0xc0009aa630) Data frame received for 1\nI0530 21:33:05.503562 1719 log.go:172] (0xc000a36000) (1) Data frame handling\nI0530 21:33:05.503732 1719 log.go:172] (0xc000a36000) (1) Data frame sent\nI0530 21:33:05.503746 1719 log.go:172] (0xc0009aa630) (0xc000a36000) Stream removed, broadcasting: 1\nI0530 21:33:05.503763 1719 log.go:172] (0xc0009aa630) Go away received\nI0530 21:33:05.504133 1719 log.go:172] (0xc0009aa630) (0xc000a36000) Stream removed, broadcasting: 1\nI0530 21:33:05.504155 1719 log.go:172] (0xc0009aa630) (0xc000a92000) 
Stream removed, broadcasting: 3\nI0530 21:33:05.504170 1719 log.go:172] (0xc0009aa630) (0xc000a360a0) Stream removed, broadcasting: 5\n" May 30 21:33:05.510: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 21:33:05.510: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 21:33:05.510: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 30 21:33:35.526: INFO: Deleting all statefulset in ns statefulset-7243 May 30 21:33:35.529: INFO: Scaling statefulset ss to 0 May 30 21:33:35.538: INFO: Waiting for statefulset status.replicas updated to 0 May 30 21:33:35.540: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:33:35.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7243" for this suite. • [SLOW TEST:95.003 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":50,"skipped":805,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:33:35.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:33:35.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e119ba8-08fe-4363-a32e-4cc9beee73db" in namespace "projected-8913" to be "success or failure" May 30 21:33:35.679: INFO: Pod "downwardapi-volume-4e119ba8-08fe-4363-a32e-4cc9beee73db": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.369454ms May 30 21:33:37.683: INFO: Pod "downwardapi-volume-4e119ba8-08fe-4363-a32e-4cc9beee73db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007572007s May 30 21:33:39.687: INFO: Pod "downwardapi-volume-4e119ba8-08fe-4363-a32e-4cc9beee73db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011665064s STEP: Saw pod success May 30 21:33:39.687: INFO: Pod "downwardapi-volume-4e119ba8-08fe-4363-a32e-4cc9beee73db" satisfied condition "success or failure" May 30 21:33:39.691: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4e119ba8-08fe-4363-a32e-4cc9beee73db container client-container: STEP: delete the pod May 30 21:33:39.728: INFO: Waiting for pod downwardapi-volume-4e119ba8-08fe-4363-a32e-4cc9beee73db to disappear May 30 21:33:39.768: INFO: Pod downwardapi-volume-4e119ba8-08fe-4363-a32e-4cc9beee73db no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:33:39.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8913" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":811,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:33:39.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:33:55.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8871" for this suite. • [SLOW TEST:16.138 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":52,"skipped":816,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:33:55.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:33:56.017: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4ce6be2-9a55-4666-b48b-43376b6ddf6e" in namespace "projected-9110" to be "success or failure" May 30 21:33:56.021: INFO: Pod "downwardapi-volume-d4ce6be2-9a55-4666-b48b-43376b6ddf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.988029ms May 30 21:33:58.025: INFO: Pod "downwardapi-volume-d4ce6be2-9a55-4666-b48b-43376b6ddf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008050364s May 30 21:34:00.029: INFO: Pod "downwardapi-volume-d4ce6be2-9a55-4666-b48b-43376b6ddf6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012209459s STEP: Saw pod success May 30 21:34:00.029: INFO: Pod "downwardapi-volume-d4ce6be2-9a55-4666-b48b-43376b6ddf6e" satisfied condition "success or failure" May 30 21:34:00.031: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d4ce6be2-9a55-4666-b48b-43376b6ddf6e container client-container: STEP: delete the pod May 30 21:34:00.058: INFO: Waiting for pod downwardapi-volume-d4ce6be2-9a55-4666-b48b-43376b6ddf6e to disappear May 30 21:34:00.106: INFO: Pod downwardapi-volume-d4ce6be2-9a55-4666-b48b-43376b6ddf6e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:34:00.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9110" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":831,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:34:00.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 30 21:34:00.213: INFO: Waiting up to 5m0s for pod "pod-60a48a9a-9d17-4c97-9e03-5d45c51e35fa" in namespace "emptydir-5773" to be "success or failure" May 30 21:34:00.219: INFO: Pod "pod-60a48a9a-9d17-4c97-9e03-5d45c51e35fa": Phase="Pending", Reason="", readiness=false. Elapsed: 5.76032ms May 30 21:34:02.223: INFO: Pod "pod-60a48a9a-9d17-4c97-9e03-5d45c51e35fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009564912s May 30 21:34:04.227: INFO: Pod "pod-60a48a9a-9d17-4c97-9e03-5d45c51e35fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013801929s STEP: Saw pod success May 30 21:34:04.227: INFO: Pod "pod-60a48a9a-9d17-4c97-9e03-5d45c51e35fa" satisfied condition "success or failure" May 30 21:34:04.230: INFO: Trying to get logs from node jerma-worker pod pod-60a48a9a-9d17-4c97-9e03-5d45c51e35fa container test-container: STEP: delete the pod May 30 21:34:04.250: INFO: Waiting for pod pod-60a48a9a-9d17-4c97-9e03-5d45c51e35fa to disappear May 30 21:34:04.308: INFO: Pod pod-60a48a9a-9d17-4c97-9e03-5d45c51e35fa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:34:04.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5773" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":836,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:34:04.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 30 21:34:08.563: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:34:08.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9520" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":875,"failed":0} SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:34:08.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-cd4b28a9-6dd7-4ead-96fb-15cd6e2ca7d0 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-cd4b28a9-6dd7-4ead-96fb-15cd6e2ca7d0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:35:37.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9135" for this suite. 
• [SLOW TEST:88.595 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":877,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:35:37.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:36:08.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4613" for this suite. STEP: Destroying namespace "nsdeletetest-4373" for this suite. May 30 21:36:08.457: INFO: Namespace nsdeletetest-4373 was already deleted STEP: Destroying namespace "nsdeletetest-4004" for this suite. 
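Namespace deletion is asynchronous: the namespace moves to Terminating, its pods are removed, and only then does the namespace object disappear, which is why the test polls before recreating. The same lifecycle by hand (names invented):

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox --restart=Never -n nsdelete-demo -- sleep 3600
kubectl wait pod/sleeper -n nsdelete-demo --for=condition=Ready --timeout=60s

# kubectl waits for deletion by default; the pod is gone before the namespace is.
kubectl delete namespace nsdelete-demo
kubectl get pods -n nsdelete-demo   # fails: the namespace no longer exists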
• [SLOW TEST:31.258 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":57,"skipped":885,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:36:08.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:36:08.570: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:36:12.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-675" for this suite. 
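The websocket test drives the same pods/exec subresource that kubectl exec uses, just negotiated over a websocket upgrade instead of kubectl's SPDY client, with stdout/stderr returned as channel-prefixed frames. A sketch of the equivalent call path (pod name invented, raw URL shown only for reference):

kubectl run echo-pod --image=busybox --restart=Never -- sleep 3600
kubectl wait pod/echo-pod --for=condition=Ready --timeout=60s

# Same subresource kubectl drives; the e2e test opens it with a websocket client.
kubectl exec echo-pod -- echo remote-exec-works

# Raw endpoint shape, for a websocket client (server address and auth omitted):
#   GET /api/v1/namespaces/default/pods/echo-pod/exec?command=echo&command=hi&stdout=true&stderr=true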
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":901,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:36:12.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-ef3399c7-fd49-4f42-ba20-5878e2e87acf STEP: Creating configMap with name cm-test-opt-upd-88da3a24-b10a-4ec7-83e7-b9275e5dae42 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ef3399c7-fd49-4f42-ba20-5878e2e87acf STEP: Updating configmap cm-test-opt-upd-88da3a24-b10a-4ec7-83e7-b9275e5dae42 STEP: Creating configMap with name cm-test-opt-create-ea6253ac-00e5-4b13-9b78-45e70a2b4eb1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:36:20.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3750" for this suite. • [SLOW TEST:8.172 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":934,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:36:20.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 30 21:36:21.000: INFO: Waiting up to 5m0s for pod "var-expansion-f13ec780-f9e0-4234-806e-0559599972b5" in namespace "var-expansion-4237" to be "success or failure" May 30 21:36:21.013: INFO: Pod "var-expansion-f13ec780-f9e0-4234-806e-0559599972b5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.263294ms May 30 21:36:23.018: INFO: Pod "var-expansion-f13ec780-f9e0-4234-806e-0559599972b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017753645s May 30 21:36:25.023: INFO: Pod "var-expansion-f13ec780-f9e0-4234-806e-0559599972b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022666367s STEP: Saw pod success May 30 21:36:25.023: INFO: Pod "var-expansion-f13ec780-f9e0-4234-806e-0559599972b5" satisfied condition "success or failure" May 30 21:36:25.026: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-f13ec780-f9e0-4234-806e-0559599972b5 container dapi-container: STEP: delete the pod May 30 21:36:25.084: INFO: Waiting for pod var-expansion-f13ec780-f9e0-4234-806e-0559599972b5 to disappear May 30 21:36:25.090: INFO: Pod var-expansion-f13ec780-f9e0-4234-806e-0559599972b5 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:36:25.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4237" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":939,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:36:25.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 30 21:36:25.175: INFO: Waiting up to 5m0s for pod "client-containers-e718e23c-c885-459b-8a87-0a57748f6eff" in namespace "containers-3378" to be "success or failure" May 30 21:36:25.179: INFO: Pod "client-containers-e718e23c-c885-459b-8a87-0a57748f6eff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.898284ms May 30 21:36:27.183: INFO: Pod "client-containers-e718e23c-c885-459b-8a87-0a57748f6eff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007949798s May 30 21:36:29.187: INFO: Pod "client-containers-e718e23c-c885-459b-8a87-0a57748f6eff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012236839s STEP: Saw pod success May 30 21:36:29.187: INFO: Pod "client-containers-e718e23c-c885-459b-8a87-0a57748f6eff" satisfied condition "success or failure" May 30 21:36:29.190: INFO: Trying to get logs from node jerma-worker2 pod client-containers-e718e23c-c885-459b-8a87-0a57748f6eff container test-container: STEP: delete the pod May 30 21:36:29.425: INFO: Waiting for pod client-containers-e718e23c-c885-459b-8a87-0a57748f6eff to disappear May 30 21:36:29.449: INFO: Pod client-containers-e718e23c-c885-459b-8a87-0a57748f6eff no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:36:29.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3378" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":948,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:36:29.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 30 21:36:29.606: INFO: namespace kubectl-3567 May 30 21:36:29.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3567' May 30 21:36:31.972: INFO: stderr: "" May 30 21:36:31.972: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 30 21:36:32.977: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:36:32.977: INFO: Found 0 / 1 May 30 21:36:33.977: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:36:33.977: INFO: Found 0 / 1 May 30 21:36:34.977: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:36:34.977: INFO: Found 1 / 1 May 30 21:36:34.977: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 30 21:36:34.980: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:36:34.980: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 30 21:36:34.980: INFO: wait on agnhost-master startup in kubectl-3567 May 30 21:36:34.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-nnw25 agnhost-master --namespace=kubectl-3567' May 30 21:36:35.092: INFO: stderr: "" May 30 21:36:35.092: INFO: stdout: "Paused\n" STEP: exposing RC May 30 21:36:35.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3567' May 30 21:36:35.262: INFO: stderr: "" May 30 21:36:35.262: INFO: stdout: "service/rm2 exposed\n" May 30 21:36:35.284: INFO: Service rm2 in namespace kubectl-3567 found. 
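kubectl expose copies the selector off the source object and maps the ports, so the rm2 service just created is equivalent to a hand-written manifest along these lines (a sketch; the selector assumes the RC's app=agnhost label seen above):

# What 'kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379' builds:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: agnhost
  ports:
  - port: 1234
    targetPort: 6379
EOF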
STEP: exposing service May 30 21:36:37.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3567' May 30 21:36:37.464: INFO: stderr: "" May 30 21:36:37.464: INFO: stdout: "service/rm3 exposed\n" May 30 21:36:37.480: INFO: Service rm3 in namespace kubectl-3567 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:36:39.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3567" for this suite. • [SLOW TEST:9.993 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":62,"skipped":952,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:36:39.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-4558 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4558 STEP: Deleting pre-stop pod May 30 21:36:52.688: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:36:52.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4558" for this suite. 
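The preStop behavior above hinges on the lifecycle hook running to completion (bounded by the grace period) before the container receives SIGTERM, which is how the tester pod manages to report "prestop": 1 to the server while being deleted. A minimal pod showing the ordering; the names and the /proc/1/fd/1 write-to-stdout trick are illustrative, not the test's actual mechanism:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop-ran >> /proc/1/fd/1; sleep 2"]
EOF

# On delete, the hook runs first, then the container is signaled.
kubectl delete pod prestop-demo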
• [SLOW TEST:13.217 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":63,"skipped":1029,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:36:52.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:36:52.769: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 30 21:36:52.803: INFO: Pod name sample-pod: Found 0 pods out of 1 May 30 21:36:57.810: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 30 21:36:57.810: INFO: Creating deployment "test-rolling-update-deployment" May 30 21:36:57.816: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 30 21:36:57.844: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 30 21:36:59.851: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 30 21:36:59.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471417, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471417, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471417, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471417, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:37:01.858: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 30 21:37:01.867: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5710 
/apis/apps/v1/namespaces/deployment-5710/deployments/test-rolling-update-deployment 10f64d60-2b03-46d2-b566-8f55e089576b 20430486 1 2020-05-30 21:36:57 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f0c3d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-30 21:36:57 +0000 UTC,LastTransitionTime:2020-05-30 21:36:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-30 21:37:01 +0000 UTC,LastTransitionTime:2020-05-30 21:36:57 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 30 21:37:01.870: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-5710 /apis/apps/v1/namespaces/deployment-5710/replicasets/test-rolling-update-deployment-67cf4f6444 f71e997e-2f9e-4999-868a-e990cd434018 20430475 1 2020-05-30 21:36:57 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 10f64d60-2b03-46d2-b566-8f55e089576b 0xc001f0cc37 0xc001f0cc38}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f0cca8 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 30 21:37:01.870: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 30 21:37:01.870: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5710 /apis/apps/v1/namespaces/deployment-5710/replicasets/test-rolling-update-controller 7d72d7f7-baf1-4b57-8063-b98099cfa14d 20430485 2 2020-05-30 21:36:52 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 10f64d60-2b03-46d2-b566-8f55e089576b 0xc001f0cb3f 0xc001f0cb50}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001f0cbb8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 21:37:01.873: INFO: Pod "test-rolling-update-deployment-67cf4f6444-w7f7p" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-w7f7p test-rolling-update-deployment-67cf4f6444- deployment-5710 /api/v1/namespaces/deployment-5710/pods/test-rolling-update-deployment-67cf4f6444-w7f7p 263b8354-791a-4863-9e77-49fc410ce577 20430474 0 2020-05-30 21:36:57 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 f71e997e-2f9e-4999-868a-e990cd434018 0xc001f0d107 0xc001f0d108}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hwgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hwgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hwgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:36:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:37:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:37:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:36:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.243,StartTime:2020-05-30 21:36:57 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 21:37:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://929fe69f73f6835264ab9041dc7959fdfa1e8d69668e1ed3cdefc1fd4f75d6cf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.243,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:37:01.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5710" for this suite. • [SLOW TEST:9.172 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":64,"skipped":1043,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:37:01.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3457 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3457 STEP: Creating statefulset with conflicting port in namespace statefulset-3457 STEP: Waiting until pod test-pod will start running in namespace statefulset-3457 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3457 May 30 21:37:06.092: INFO: Observed stateful pod in namespace: statefulset-3457, name: ss-0, uid: 376233e6-379a-4bbf-a4d1-5d33383498ca, status phase: Pending. Waiting for statefulset controller to delete. May 30 21:37:06.630: INFO: Observed stateful pod in namespace: statefulset-3457, name: ss-0, uid: 376233e6-379a-4bbf-a4d1-5d33383498ca, status phase: Failed. Waiting for statefulset controller to delete. 
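------------------------------
Stepping back to the RollingUpdateDeployment test that just passed: the deployment dump above shows the default rolling-update strategy of maxUnavailable 25% and maxSurge 25%. A minimal sketch of a Deployment with that shape (the name is illustrative; labels and image match the test's):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of desired pods may be unavailable mid-roll
      maxSurge: 25%         # at most 25% extra pods may exist above the desired count
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF

# Any template change creates a new ReplicaSet and scales the old one to zero,
# which is what the test asserts; the rollout can be watched with:
kubectl rollout status deployment/rolling-demo
------------------------------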
May 30 21:37:06.651: INFO: Observed stateful pod in namespace: statefulset-3457, name: ss-0, uid: 376233e6-379a-4bbf-a4d1-5d33383498ca, status phase: Failed. Waiting for statefulset controller to delete. May 30 21:37:06.707: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3457 STEP: Removing pod with conflicting port in namespace statefulset-3457 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3457 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 30 21:37:10.734: INFO: Deleting all statefulset in ns statefulset-3457 May 30 21:37:10.737: INFO: Scaling statefulset ss to 0 May 30 21:37:20.762: INFO: Waiting for statefulset status.replicas updated to 0 May 30 21:37:20.765: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:37:20.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3457" for this suite. • [SLOW TEST:18.908 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":65,"skipped":1043,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:37:20.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-5a742926-9fc0-422b-983c-3763e3e1e701 STEP: Creating a pod to test consume secrets May 30 21:37:20.858: INFO: Waiting up to 5m0s for pod "pod-secrets-fbf2310e-069b-46b9-98a3-4c1e01ce1810" in namespace "secrets-6196" to be "success or failure" May 30 21:37:20.878: INFO: Pod "pod-secrets-fbf2310e-069b-46b9-98a3-4c1e01ce1810": Phase="Pending", Reason="", readiness=false. Elapsed: 20.328899ms May 30 21:37:22.882: INFO: Pod "pod-secrets-fbf2310e-069b-46b9-98a3-4c1e01ce1810": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024614844s May 30 21:37:24.885: INFO: Pod "pod-secrets-fbf2310e-069b-46b9-98a3-4c1e01ce1810": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027872167s STEP: Saw pod success May 30 21:37:24.885: INFO: Pod "pod-secrets-fbf2310e-069b-46b9-98a3-4c1e01ce1810" satisfied condition "success or failure" May 30 21:37:24.888: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-fbf2310e-069b-46b9-98a3-4c1e01ce1810 container secret-volume-test: STEP: delete the pod May 30 21:37:24.908: INFO: Waiting for pod pod-secrets-fbf2310e-069b-46b9-98a3-4c1e01ce1810 to disappear May 30 21:37:24.948: INFO: Pod pod-secrets-fbf2310e-069b-46b9-98a3-4c1e01ce1810 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:37:24.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6196" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1050,"failed":0} ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:37:24.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 30 21:37:25.020: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:37:32.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8772" for this suite. 
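------------------------------
The InitContainer test above checks that on a restartPolicy: Never pod, init containers run to completion, in order, before the app container is started. A minimal sketch (names and images are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["true"]   # must exit 0 before init2 is started
  - name: init2
    image: busybox
    command: ["true"]   # must exit 0 before the app container is started
  containers:
  - name: run1
    image: busybox
    command: ["sh", "-c", "echo main container ran"]
EOF
------------------------------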
• [SLOW TEST:7.784 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":67,"skipped":1050,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:37:32.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:37:32.810: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e666f7e8-a053-4029-b737-7bc0132b52e6" in namespace "downward-api-6618" to be "success or failure" May 30 21:37:32.812: INFO: Pod "downwardapi-volume-e666f7e8-a053-4029-b737-7bc0132b52e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431967ms May 30 21:37:34.817: INFO: Pod "downwardapi-volume-e666f7e8-a053-4029-b737-7bc0132b52e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006893086s May 30 21:37:36.824: INFO: Pod "downwardapi-volume-e666f7e8-a053-4029-b737-7bc0132b52e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014407753s STEP: Saw pod success May 30 21:37:36.825: INFO: Pod "downwardapi-volume-e666f7e8-a053-4029-b737-7bc0132b52e6" satisfied condition "success or failure" May 30 21:37:36.829: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e666f7e8-a053-4029-b737-7bc0132b52e6 container client-container: STEP: delete the pod May 30 21:37:36.858: INFO: Waiting for pod downwardapi-volume-e666f7e8-a053-4029-b737-7bc0132b52e6 to disappear May 30 21:37:36.872: INFO: Pod downwardapi-volume-e666f7e8-a053-4029-b737-7bc0132b52e6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:37:36.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6618" for this suite. 
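------------------------------
The Downward API test above mounts the container's own CPU request as a file in the pod. A minimal sketch of the volume wiring (pod name and paths are illustrative; the divisor converts the quantity into the unit written to the file):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m        # the file will contain "250"
EOF
------------------------------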
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1078,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:37:36.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2090 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 30 21:37:37.243: INFO: Found 0 stateful pods, waiting for 3 May 30 21:37:47.246: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 30 21:37:47.246: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 30 21:37:47.246: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 30 21:37:57.248: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 30 21:37:57.248: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 30 21:37:57.248: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 30 21:37:57.270: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 30 21:38:07.311: INFO: Updating stateful set ss2 May 30 21:38:07.352: INFO: Waiting for Pod statefulset-2090/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 30 21:38:17.500: INFO: Found 2 stateful pods, waiting for 3 May 30 21:38:27.504: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 30 21:38:27.504: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 30 21:38:27.504: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 30 21:38:27.528: INFO: Updating stateful set ss2 May 30 21:38:27.582: INFO: Waiting for Pod statefulset-2090/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 30 21:38:37.606: INFO: Updating stateful set ss2 May 30 21:38:37.617: INFO: Waiting for StatefulSet statefulset-2090/ss2 to complete update May 30 21:38:37.617: INFO: 
Waiting for Pod statefulset-2090/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 30 21:38:47.624: INFO: Deleting all statefulset in ns statefulset-2090 May 30 21:38:47.626: INFO: Scaling statefulset ss2 to 0 May 30 21:39:17.650: INFO: Waiting for statefulset status.replicas updated to 0 May 30 21:39:17.653: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:39:17.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2090" for this suite. • [SLOW TEST:100.771 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":69,"skipped":1081,"failed":0} [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:39:17.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0530 21:39:30.152813 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
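------------------------------
The StatefulSet canary test above is driven by the updateStrategy partition: pods with an ordinal greater than or equal to the partition receive the new revision, the rest stay on the old one. A sketch of the same flow with kubectl patch, using this test's names and images:

# Canary: only ordinals >= 2 (i.e. ss2-2) pick up template changes.
kubectl patch statefulset ss2 -p \
  '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'

# Update the template image to create the new revision.
kubectl patch statefulset ss2 --type=json -p \
  '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/httpd:2.4.39-alpine"}]'

# Phased rollout: lower the partition step by step to move more ordinals over.
kubectl patch statefulset ss2 -p \
  '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
------------------------------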
May 30 21:39:30.152: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:39:30.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1886" for this suite. • [SLOW TEST:12.742 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":70,"skipped":1081,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:39:30.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:39:31.316: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:39:33.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471571, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471571, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471571, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471571, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:39:35.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471571, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471571, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471571, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471571, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:39:38.369: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:39:38.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-975-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:39:39.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5426" for this suite. STEP: Destroying namespace "webhook-5426-markers" for this suite. 
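------------------------------
The webhook test above deploys an HTTPS admission server behind a service, then registers it for the custom resource via the AdmissionRegistration API. A rough sketch of the registration object (group, resource, path, and names here are illustrative; the real test also injects a generated caBundle):

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource-demo
webhooks:
- name: mutate-crd.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  rules:
  - apiGroups: ["stable.example.com"]   # the CRD's group
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["crontabs"]             # the CRD's plural name
  clientConfig:
    service:
      namespace: default
      name: e2e-test-webhook            # the service fronting the webhook pod
      path: /mutating-custom-resource
    # caBundle: <base64 PEM bundle that signs the serving certificate>
EOF
------------------------------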
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.812 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":71,"skipped":1086,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:39:39.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-44e6e4dd-0771-487b-884b-34f32ea34619 STEP: Creating a pod to test consume secrets May 30 21:39:39.332: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1cbf5a4c-bb49-4a76-8863-7c0d2b6b3bd8" in namespace "projected-4140" to be "success or failure" May 30 21:39:39.348: INFO: Pod "pod-projected-secrets-1cbf5a4c-bb49-4a76-8863-7c0d2b6b3bd8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.820886ms May 30 21:39:41.352: INFO: Pod "pod-projected-secrets-1cbf5a4c-bb49-4a76-8863-7c0d2b6b3bd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020065152s May 30 21:39:43.356: INFO: Pod "pod-projected-secrets-1cbf5a4c-bb49-4a76-8863-7c0d2b6b3bd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024527576s STEP: Saw pod success May 30 21:39:43.356: INFO: Pod "pod-projected-secrets-1cbf5a4c-bb49-4a76-8863-7c0d2b6b3bd8" satisfied condition "success or failure" May 30 21:39:43.360: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-1cbf5a4c-bb49-4a76-8863-7c0d2b6b3bd8 container secret-volume-test: STEP: delete the pod May 30 21:39:43.393: INFO: Waiting for pod pod-projected-secrets-1cbf5a4c-bb49-4a76-8863-7c0d2b6b3bd8 to disappear May 30 21:39:43.439: INFO: Pod pod-projected-secrets-1cbf5a4c-bb49-4a76-8863-7c0d2b6b3bd8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:39:43.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4140" for this suite. 
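------------------------------
The projected-secret test above mounts the same secret into one pod through two separate projected volumes. A minimal sketch (secret name, key, and mount paths are illustrative):

kubectl create secret generic projected-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-demo
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-demo
EOF
------------------------------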
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:39:43.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-bf202d64-f0de-4082-9cf3-57f96c4cb76a STEP: Creating a pod to test consume configMaps May 30 21:39:43.568: INFO: Waiting up to 5m0s for pod "pod-configmaps-25e9dade-d798-4d85-b76c-8e3359578773" in namespace "configmap-2149" to be "success or failure" May 30 21:39:43.590: INFO: Pod "pod-configmaps-25e9dade-d798-4d85-b76c-8e3359578773": Phase="Pending", Reason="", readiness=false. Elapsed: 21.725941ms May 30 21:39:45.594: INFO: Pod "pod-configmaps-25e9dade-d798-4d85-b76c-8e3359578773": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026164122s May 30 21:39:47.598: INFO: Pod "pod-configmaps-25e9dade-d798-4d85-b76c-8e3359578773": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029917426s STEP: Saw pod success May 30 21:39:47.598: INFO: Pod "pod-configmaps-25e9dade-d798-4d85-b76c-8e3359578773" satisfied condition "success or failure" May 30 21:39:47.603: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-25e9dade-d798-4d85-b76c-8e3359578773 container configmap-volume-test: STEP: delete the pod May 30 21:39:47.644: INFO: Waiting for pod pod-configmaps-25e9dade-d798-4d85-b76c-8e3359578773 to disappear May 30 21:39:47.655: INFO: Pod pod-configmaps-25e9dade-d798-4d85-b76c-8e3359578773 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:39:47.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2149" for this suite. 
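------------------------------
The ConfigMap "mappings" test above maps a key to a custom file path inside the volume instead of using the key name as the filename. A minimal sketch (names and the key/path pair are illustrative):

kubectl create configmap configmap-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-demo
      items:
      - key: data-1
        path: path/to/data-2   # key data-1 appears at this relative path
EOF
------------------------------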
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1139,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:39:47.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:39:47.765: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:39:51.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2561" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1147,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:39:51.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 30 21:39:56.489: INFO: Successfully updated pod "annotationupdate57fec541-47fc-4d69-ad00-8fe817abb4b6" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:39:58.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3427" for this suite. 
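------------------------------
The annotation-update test above relies on the kubelet refreshing downward-API files when pod metadata changes. A minimal sketch (pod name, annotation key, and paths are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

# Changing the annotation is eventually reflected in the mounted file:
kubectl annotate pod annotationupdate-demo build=two --overwrite
------------------------------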
• [SLOW TEST:6.719 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1172,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:39:58.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 30 21:39:58.638: INFO: Waiting up to 5m0s for pod "pod-2b7d3dae-a4d9-4290-a351-68f192ff96d6" in namespace "emptydir-8416" to be "success or failure" May 30 21:39:58.642: INFO: Pod "pod-2b7d3dae-a4d9-4290-a351-68f192ff96d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.27634ms May 30 21:40:00.667: INFO: Pod "pod-2b7d3dae-a4d9-4290-a351-68f192ff96d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028420466s May 30 21:40:02.679: INFO: Pod "pod-2b7d3dae-a4d9-4290-a351-68f192ff96d6": Phase="Running", Reason="", readiness=true. Elapsed: 4.040247778s May 30 21:40:04.683: INFO: Pod "pod-2b7d3dae-a4d9-4290-a351-68f192ff96d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044387912s STEP: Saw pod success May 30 21:40:04.683: INFO: Pod "pod-2b7d3dae-a4d9-4290-a351-68f192ff96d6" satisfied condition "success or failure" May 30 21:40:04.686: INFO: Trying to get logs from node jerma-worker2 pod pod-2b7d3dae-a4d9-4290-a351-68f192ff96d6 container test-container: STEP: delete the pod May 30 21:40:04.741: INFO: Waiting for pod pod-2b7d3dae-a4d9-4290-a351-68f192ff96d6 to disappear May 30 21:40:04.745: INFO: Pod pod-2b7d3dae-a4d9-4290-a351-68f192ff96d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:40:04.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8416" for this suite. 
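------------------------------
The EmptyDir test above mounts a default-medium emptyDir into a pod running as a non-root user and checks the expected 0777 mode on the mount. A rough sketch of the shape of such a pod (the e2e suite uses its own mounttest image; busybox here is a stand-in):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume && touch /test-volume/ok"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # default medium (node disk)
EOF
------------------------------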
• [SLOW TEST:6.217 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1182,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:40:04.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:40:04.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a387a604-839f-4a19-b63e-499bd1ceac8a" in namespace "downward-api-7004" to be "success or failure" May 30 21:40:04.871: INFO: Pod "downwardapi-volume-a387a604-839f-4a19-b63e-499bd1ceac8a": Phase="Pending", Reason="", readiness=false. Elapsed: 57.859262ms May 30 21:40:06.875: INFO: Pod "downwardapi-volume-a387a604-839f-4a19-b63e-499bd1ceac8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062261937s May 30 21:40:08.901: INFO: Pod "downwardapi-volume-a387a604-839f-4a19-b63e-499bd1ceac8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088183037s STEP: Saw pod success May 30 21:40:08.901: INFO: Pod "downwardapi-volume-a387a604-839f-4a19-b63e-499bd1ceac8a" satisfied condition "success or failure" May 30 21:40:08.904: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a387a604-839f-4a19-b63e-499bd1ceac8a container client-container: STEP: delete the pod May 30 21:40:08.927: INFO: Waiting for pod downwardapi-volume-a387a604-839f-4a19-b63e-499bd1ceac8a to disappear May 30 21:40:08.943: INFO: Pod downwardapi-volume-a387a604-839f-4a19-b63e-499bd1ceac8a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:40:08.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7004" for this suite. 
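------------------------------
The DefaultMode test above sets the permission bits applied to every file projected into a downward API volume. A minimal sketch of the relevant field (pod name and item path are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400      # every projected file gets mode 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
------------------------------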
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1183,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:40:08.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-46sr7 in namespace proxy-5242 I0530 21:40:09.283723 6 runners.go:189] Created replication controller with name: proxy-service-46sr7, namespace: proxy-5242, replica count: 1 I0530 21:40:10.334084 6 runners.go:189] proxy-service-46sr7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 21:40:11.334277 6 runners.go:189] proxy-service-46sr7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 21:40:12.334485 6 runners.go:189] proxy-service-46sr7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0530 21:40:13.334772 6 runners.go:189] proxy-service-46sr7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0530 21:40:14.335018 6 runners.go:189] proxy-service-46sr7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0530 21:40:15.335270 6 runners.go:189] proxy-service-46sr7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0530 21:40:16.335504 6 runners.go:189] proxy-service-46sr7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 21:40:16.339: INFO: setup took 7.178961652s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 30 21:40:16.394: INFO: (0) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... (200; 55.151304ms) May 30 21:40:16.394: INFO: (0) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 55.401105ms) May 30 21:40:16.395: INFO: (0) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 55.655525ms) May 30 21:40:16.395: INFO: (0) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 55.75978ms) May 30 21:40:16.395: INFO: (0) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... 
(200; 55.953216ms) May 30 21:40:16.395: INFO: (0) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 55.96631ms) May 30 21:40:16.395: INFO: (0) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 56.239091ms) May 30 21:40:16.397: INFO: (0) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname2/proxy/: bar (200; 58.172392ms) May 30 21:40:16.398: INFO: (0) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 58.65434ms) May 30 21:40:16.398: INFO: (0) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 58.858039ms) May 30 21:40:16.398: INFO: (0) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 59.288217ms) May 30 21:40:16.403: INFO: (0) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 64.516539ms) May 30 21:40:16.404: INFO: (0) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test (200; 5.541667ms) May 30 21:40:16.418: INFO: (1) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... (200; 5.834415ms) May 30 21:40:16.419: INFO: (1) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 5.81013ms) May 30 21:40:16.419: INFO: (1) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 6.164102ms) May 30 21:40:16.419: INFO: (1) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... (200; 6.045538ms) May 30 21:40:16.419: INFO: (1) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 5.967236ms) May 30 21:40:16.419: INFO: (1) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 5.989351ms) May 30 21:40:16.419: INFO: (1) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: ... (200; 4.134077ms) May 30 21:40:16.426: INFO: (2) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... (200; 4.534532ms) May 30 21:40:16.427: INFO: (2) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 4.629172ms) May 30 21:40:16.427: INFO: (2) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 4.762683ms) May 30 21:40:16.427: INFO: (2) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 4.774277ms) May 30 21:40:16.427: INFO: (2) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 4.935206ms) May 30 21:40:16.427: INFO: (2) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test (200; 8.122253ms) May 30 21:40:16.438: INFO: (3) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test<... 
(200; 8.417979ms) May 30 21:40:16.438: INFO: (3) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 8.510698ms) May 30 21:40:16.438: INFO: (3) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 8.463031ms) May 30 21:40:16.438: INFO: (3) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:462/proxy/: tls qux (200; 8.49478ms) May 30 21:40:16.438: INFO: (3) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 8.500886ms) May 30 21:40:16.439: INFO: (3) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 9.089097ms) May 30 21:40:16.439: INFO: (3) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... (200; 9.039356ms) May 30 21:40:16.439: INFO: (3) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname2/proxy/: bar (200; 9.187167ms) May 30 21:40:16.440: INFO: (3) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 10.090519ms) May 30 21:40:16.440: INFO: (3) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 10.171492ms) May 30 21:40:16.440: INFO: (3) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 10.335367ms) May 30 21:40:16.440: INFO: (3) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname2/proxy/: tls qux (200; 10.505426ms) May 30 21:40:16.445: INFO: (4) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 4.911074ms) May 30 21:40:16.445: INFO: (4) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... (200; 4.933431ms) May 30 21:40:16.445: INFO: (4) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 4.978772ms) May 30 21:40:16.445: INFO: (4) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:462/proxy/: tls qux (200; 5.267211ms) May 30 21:40:16.445: INFO: (4) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 5.247823ms) May 30 21:40:16.446: INFO: (4) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 5.341151ms) May 30 21:40:16.446: INFO: (4) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... (200; 5.475725ms) May 30 21:40:16.446: INFO: (4) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: ... (200; 2.485533ms) May 30 21:40:16.450: INFO: (5) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 2.895129ms) May 30 21:40:16.450: INFO: (5) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 3.48291ms) May 30 21:40:16.450: INFO: (5) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... (200; 3.53092ms) May 30 21:40:16.450: INFO: (5) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 3.606424ms) May 30 21:40:16.451: INFO: (5) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 3.785861ms) May 30 21:40:16.451: INFO: (5) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test (200; 3.436694ms) May 30 21:40:16.456: INFO: (6) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 4.059136ms) May 30 21:40:16.457: INFO: (6) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... 
(200; 4.475864ms) May 30 21:40:16.457: INFO: (6) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 4.477949ms) May 30 21:40:16.457: INFO: (6) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: ... (200; 5.400374ms) May 30 21:40:16.458: INFO: (6) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 5.520916ms) May 30 21:40:16.458: INFO: (6) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 5.508623ms) May 30 21:40:16.458: INFO: (6) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname2/proxy/: bar (200; 5.696958ms) May 30 21:40:16.458: INFO: (6) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 5.703067ms) May 30 21:40:16.461: INFO: (7) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 2.935937ms) May 30 21:40:16.461: INFO: (7) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... (200; 3.548382ms) May 30 21:40:16.462: INFO: (7) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.599898ms) May 30 21:40:16.462: INFO: (7) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 3.654211ms) May 30 21:40:16.462: INFO: (7) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 4.345801ms) May 30 21:40:16.462: INFO: (7) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 4.449631ms) May 30 21:40:16.462: INFO: (7) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 4.474669ms) May 30 21:40:16.462: INFO: (7) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... (200; 4.516184ms) May 30 21:40:16.462: INFO: (7) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test (200; 4.510291ms) May 30 21:40:16.462: INFO: (7) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 4.549502ms) May 30 21:40:16.462: INFO: (7) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname2/proxy/: bar (200; 4.580036ms) May 30 21:40:16.463: INFO: (7) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 4.646153ms) May 30 21:40:16.463: INFO: (7) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:462/proxy/: tls qux (200; 4.699744ms) May 30 21:40:16.463: INFO: (7) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 4.695211ms) May 30 21:40:16.463: INFO: (7) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname2/proxy/: tls qux (200; 4.980716ms) May 30 21:40:16.466: INFO: (8) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... 
(200; 2.492075ms) May 30 21:40:16.467: INFO: (8) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.615207ms) May 30 21:40:16.467: INFO: (8) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 3.783186ms) May 30 21:40:16.467: INFO: (8) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname2/proxy/: tls qux (200; 4.215627ms) May 30 21:40:16.467: INFO: (8) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 4.403268ms) May 30 21:40:16.467: INFO: (8) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 4.488211ms) May 30 21:40:16.468: INFO: (8) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 4.442205ms) May 30 21:40:16.468: INFO: (8) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 4.43714ms) May 30 21:40:16.468: INFO: (8) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: ... (200; 4.639578ms) May 30 21:40:16.468: INFO: (8) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 4.644054ms) May 30 21:40:16.468: INFO: (8) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 4.729915ms) May 30 21:40:16.468: INFO: (8) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 4.726121ms) May 30 21:40:16.468: INFO: (8) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:462/proxy/: tls qux (200; 4.853101ms) May 30 21:40:16.468: INFO: (8) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 4.840411ms) May 30 21:40:16.471: INFO: (9) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... (200; 3.133368ms) May 30 21:40:16.472: INFO: (9) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.679118ms) May 30 21:40:16.472: INFO: (9) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 4.295067ms) May 30 21:40:16.472: INFO: (9) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... (200; 4.311352ms) May 30 21:40:16.473: INFO: (9) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 4.397201ms) May 30 21:40:16.473: INFO: (9) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:462/proxy/: tls qux (200; 4.495136ms) May 30 21:40:16.473: INFO: (9) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 4.487183ms) May 30 21:40:16.473: INFO: (9) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 4.484237ms) May 30 21:40:16.473: INFO: (9) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 4.47351ms) May 30 21:40:16.473: INFO: (9) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: ... 
(200; 10.168262ms) May 30 21:40:16.485: INFO: (10) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 10.509645ms) May 30 21:40:16.485: INFO: (10) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 10.85622ms) May 30 21:40:16.485: INFO: (10) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 10.76093ms) May 30 21:40:16.486: INFO: (10) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 11.291533ms) May 30 21:40:16.486: INFO: (10) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test (200; 22.868796ms) May 30 21:40:16.497: INFO: (10) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 22.926257ms) May 30 21:40:16.497: INFO: (10) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname2/proxy/: tls qux (200; 22.925622ms) May 30 21:40:16.497: INFO: (10) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 22.943869ms) May 30 21:40:16.497: INFO: (10) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 22.978237ms) May 30 21:40:16.508: INFO: (10) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 33.939772ms) May 30 21:40:16.508: INFO: (10) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... (200; 33.861309ms) May 30 21:40:16.515: INFO: (11) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... (200; 6.313402ms) May 30 21:40:16.515: INFO: (11) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 6.320123ms) May 30 21:40:16.515: INFO: (11) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:462/proxy/: tls qux (200; 6.47112ms) May 30 21:40:16.515: INFO: (11) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname2/proxy/: bar (200; 6.315828ms) May 30 21:40:16.515: INFO: (11) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 6.261675ms) May 30 21:40:16.515: INFO: (11) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 6.349186ms) May 30 21:40:16.515: INFO: (11) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... (200; 6.620306ms) May 30 21:40:16.516: INFO: (11) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 7.736679ms) May 30 21:40:16.516: INFO: (11) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 7.790294ms) May 30 21:40:16.516: INFO: (11) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 7.871745ms) May 30 21:40:16.516: INFO: (11) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test (200; 3.008502ms) May 30 21:40:16.520: INFO: (12) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 2.996761ms) May 30 21:40:16.520: INFO: (12) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... 
(200; 3.097065ms) May 30 21:40:16.520: INFO: (12) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 3.040483ms) May 30 21:40:16.521: INFO: (12) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.869996ms) May 30 21:40:16.521: INFO: (12) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.973869ms) May 30 21:40:16.522: INFO: (12) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 4.331123ms) May 30 21:40:16.522: INFO: (12) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: ... (200; 4.540624ms) May 30 21:40:16.522: INFO: (12) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 5.011238ms) May 30 21:40:16.522: INFO: (12) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 4.998504ms) May 30 21:40:16.523: INFO: (12) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 5.189785ms) May 30 21:40:16.523: INFO: (12) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 5.16712ms) May 30 21:40:16.523: INFO: (12) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname2/proxy/: tls qux (200; 5.156686ms) May 30 21:40:16.523: INFO: (12) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname2/proxy/: bar (200; 5.19742ms) May 30 21:40:16.526: INFO: (13) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.441471ms) May 30 21:40:16.526: INFO: (13) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.620114ms) May 30 21:40:16.527: INFO: (13) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 4.252952ms) May 30 21:40:16.527: INFO: (13) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: ... (200; 4.252504ms) May 30 21:40:16.527: INFO: (13) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 4.320094ms) May 30 21:40:16.527: INFO: (13) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... 
(200; 4.29018ms) May 30 21:40:16.527: INFO: (13) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 4.425029ms) May 30 21:40:16.527: INFO: (13) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 4.396091ms) May 30 21:40:16.528: INFO: (13) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 5.67007ms) May 30 21:40:16.528: INFO: (13) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname2/proxy/: bar (200; 5.672911ms) May 30 21:40:16.528: INFO: (13) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 5.640265ms) May 30 21:40:16.528: INFO: (13) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 5.653958ms) May 30 21:40:16.528: INFO: (13) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname2/proxy/: tls qux (200; 5.644587ms) May 30 21:40:16.528: INFO: (13) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 5.693529ms) May 30 21:40:16.529: INFO: (13) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:462/proxy/: tls qux (200; 6.736122ms) May 30 21:40:16.544: INFO: (14) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... (200; 14.719082ms) May 30 21:40:16.544: INFO: (14) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 14.943266ms) May 30 21:40:16.544: INFO: (14) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 15.03146ms) May 30 21:40:16.544: INFO: (14) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 15.034405ms) May 30 21:40:16.545: INFO: (14) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test<... (200; 17.858686ms) May 30 21:40:16.547: INFO: (14) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 17.789958ms) May 30 21:40:16.551: INFO: (15) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 3.603951ms) May 30 21:40:16.551: INFO: (15) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: ... (200; 4.008559ms) May 30 21:40:16.551: INFO: (15) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname2/proxy/: tls qux (200; 4.040695ms) May 30 21:40:16.551: INFO: (15) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 4.089556ms) May 30 21:40:16.551: INFO: (15) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 4.005848ms) May 30 21:40:16.552: INFO: (15) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... 
(200; 4.009069ms) May 30 21:40:16.551: INFO: (15) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 4.090432ms) May 30 21:40:16.552: INFO: (15) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 4.102804ms) May 30 21:40:16.552: INFO: (15) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname2/proxy/: bar (200; 4.617329ms) May 30 21:40:16.552: INFO: (15) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 4.969267ms) May 30 21:40:16.552: INFO: (15) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 4.949228ms) May 30 21:40:16.552: INFO: (15) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 4.897094ms) May 30 21:40:16.552: INFO: (15) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 4.938079ms) May 30 21:40:16.556: INFO: (16) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.100823ms) May 30 21:40:16.556: INFO: (16) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 3.13747ms) May 30 21:40:16.556: INFO: (16) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 3.111608ms) May 30 21:40:16.556: INFO: (16) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.132384ms) May 30 21:40:16.556: INFO: (16) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 3.129283ms) May 30 21:40:16.556: INFO: (16) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 3.771886ms) May 30 21:40:16.557: INFO: (16) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... (200; 3.949597ms) May 30 21:40:16.557: INFO: (16) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test<... (200; 4.02602ms) May 30 21:40:16.557: INFO: (16) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 4.266103ms) May 30 21:40:16.557: INFO: (16) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 4.4166ms) May 30 21:40:16.557: INFO: (16) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 4.378111ms) May 30 21:40:16.557: INFO: (16) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:462/proxy/: tls qux (200; 4.517639ms) May 30 21:40:16.557: INFO: (16) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname2/proxy/: bar (200; 4.523252ms) May 30 21:40:16.557: INFO: (16) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 4.549792ms) May 30 21:40:16.557: INFO: (16) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname2/proxy/: tls qux (200; 4.76127ms) May 30 21:40:16.559: INFO: (17) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 1.887771ms) May 30 21:40:16.559: INFO: (17) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 1.859069ms) May 30 21:40:16.559: INFO: (17) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test<... 
(200; 2.517929ms) May 30 21:40:16.560: INFO: (17) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:462/proxy/: tls qux (200; 2.578653ms) May 30 21:40:16.560: INFO: (17) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 2.954393ms) May 30 21:40:16.560: INFO: (17) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 2.956002ms) May 30 21:40:16.560: INFO: (17) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... (200; 3.015907ms) May 30 21:40:16.560: INFO: (17) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 2.942892ms) May 30 21:40:16.561: INFO: (17) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 3.642285ms) May 30 21:40:16.561: INFO: (17) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname2/proxy/: bar (200; 3.671566ms) May 30 21:40:16.561: INFO: (17) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 3.688064ms) May 30 21:40:16.561: INFO: (17) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 4.083271ms) May 30 21:40:16.561: INFO: (17) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname2/proxy/: tls qux (200; 4.099068ms) May 30 21:40:16.561: INFO: (17) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 4.086128ms) May 30 21:40:16.565: INFO: (18) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 3.599725ms) May 30 21:40:16.565: INFO: (18) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7/proxy/: test (200; 3.664336ms) May 30 21:40:16.565: INFO: (18) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.67368ms) May 30 21:40:16.565: INFO: (18) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:460/proxy/: tls baz (200; 3.769534ms) May 30 21:40:16.565: INFO: (18) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:1080/proxy/: test<... (200; 3.715042ms) May 30 21:40:16.565: INFO: (18) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... (200; 3.747236ms) May 30 21:40:16.565: INFO: (18) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 3.750003ms) May 30 21:40:16.565: INFO: (18) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:462/proxy/: tls qux (200; 3.790051ms) May 30 21:40:16.565: INFO: (18) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.748399ms) May 30 21:40:16.565: INFO: (18) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test<... (200; 2.903648ms) May 30 21:40:16.570: INFO: (19) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:162/proxy/: bar (200; 3.709957ms) May 30 21:40:16.570: INFO: (19) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname2/proxy/: tls qux (200; 3.821983ms) May 30 21:40:16.570: INFO: (19) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:1080/proxy/: ... 
(200; 3.81514ms) May 30 21:40:16.570: INFO: (19) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.888687ms) May 30 21:40:16.570: INFO: (19) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:462/proxy/: tls qux (200; 3.895167ms) May 30 21:40:16.570: INFO: (19) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname2/proxy/: bar (200; 3.855793ms) May 30 21:40:16.570: INFO: (19) /api/v1/namespaces/proxy-5242/pods/http:proxy-service-46sr7-m44k7:162/proxy/: bar (200; 3.943639ms) May 30 21:40:16.570: INFO: (19) /api/v1/namespaces/proxy-5242/pods/https:proxy-service-46sr7-m44k7:443/proxy/: test (200; 3.915319ms) May 30 21:40:16.570: INFO: (19) /api/v1/namespaces/proxy-5242/pods/proxy-service-46sr7-m44k7:160/proxy/: foo (200; 3.891364ms) May 30 21:40:16.571: INFO: (19) /api/v1/namespaces/proxy-5242/services/proxy-service-46sr7:portname1/proxy/: foo (200; 4.242729ms) May 30 21:40:16.571: INFO: (19) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname1/proxy/: foo (200; 4.222758ms) May 30 21:40:16.571: INFO: (19) /api/v1/namespaces/proxy-5242/services/http:proxy-service-46sr7:portname2/proxy/: bar (200; 4.243213ms) May 30 21:40:16.571: INFO: (19) /api/v1/namespaces/proxy-5242/services/https:proxy-service-46sr7:tlsportname1/proxy/: tls baz (200; 4.345785ms) STEP: deleting ReplicationController proxy-service-46sr7 in namespace proxy-5242, will wait for the garbage collector to delete the pods May 30 21:40:16.628: INFO: Deleting ReplicationController proxy-service-46sr7 took: 6.047401ms May 30 21:40:16.929: INFO: Terminating ReplicationController proxy-service-46sr7 pods took: 300.476779ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:40:20.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5242" for this suite. 
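
Each attempt above is a GET against the apiserver proxy subresource: paths of the form /api/v1/namespaces/<ns>/pods/<pod>:<port>/proxy/ are forwarded by the apiserver to that pod port, and the services/... form resolves through the service to one of its ready endpoints. A minimal sketch of reproducing one attempt by hand; the namespace, pod, and port names below are illustrative placeholders, not objects from this run:

# Send an authenticated GET through the apiserver to port 160 of the pod.
kubectl get --raw "/api/v1/namespaces/proxy-test/pods/echo-pod:160/proxy/"

# Same idea via the service proxy; the apiserver picks a ready endpoint.
kubectl get --raw "/api/v1/namespaces/proxy-test/services/echo-svc:portname1/proxy/"
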
• [SLOW TEST:11.188 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":78,"skipped":1198,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:40:20.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0530 21:40:50.938866 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 30 21:40:50.938: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:40:50.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3502" for this suite. 
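
The deletion above carries deleteOptions.propagationPolicy=Orphan, so the 30-second wait is there to prove the garbage collector leaves the dependent ReplicaSet alone. A sketch of issuing the same kind of delete by hand; the deployment name is a placeholder, and --cascade=false is the v1.17-era kubectl spelling of an orphaning delete:

# kubectl form: delete the owner but orphan its dependents
kubectl delete deployment my-deploy --cascade=false

# Raw API form: DELETE with an explicit DeleteOptions body
kubectl proxy --port=8001 &
curl -X DELETE "http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments/my-deploy" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'

# The ReplicaSet should survive, now without an ownerReference
kubectl get rs -o jsonpath='{.items[*].metadata.ownerReferences}'
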
• [SLOW TEST:30.807 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":79,"skipped":1204,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:40:50.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:41:07.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6693" for this suite. • [SLOW TEST:16.395 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":80,"skipped":1219,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:41:07.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:41:07.871: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:41:09.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471667, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471667, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471667, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471667, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:41:12.914: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:41:23.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6420" for this suite. STEP: Destroying namespace "webhook-6420-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.927 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":81,"skipped":1221,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:41:23.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 30 21:41:23.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5235' May 30 21:41:23.455: INFO: stderr: "" May 30 21:41:23.455: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 30 21:41:28.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5235 -o json' May 30 21:41:28.601: INFO: stderr: "" May 30 21:41:28.601: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-30T21:41:23Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5235\",\n \"resourceVersion\": \"20432430\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5235/pods/e2e-test-httpd-pod\",\n \"uid\": \"a598d880-2742-4088-914b-51eea5c90898\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-zz6bc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n 
\"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-zz6bc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-zz6bc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-30T21:41:23Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-30T21:41:27Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-30T21:41:27Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-30T21:41:23Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://954c8326f3fb8b186a1e891d2d0010e042aea9c9ef59df9379c4f03d67d7cc01\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-30T21:41:26Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.10\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.10\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-30T21:41:23Z\"\n }\n}\n" STEP: replace the image in the pod May 30 21:41:28.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5235' May 30 21:41:31.199: INFO: stderr: "" May 30 21:41:31.199: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 30 21:41:31.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5235' May 30 21:41:39.507: INFO: stderr: "" May 30 21:41:39.507: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:41:39.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5235" for this suite. 
• [SLOW TEST:16.245 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":82,"skipped":1224,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:41:39.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:41:40.146: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:41:42.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471700, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471700, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471700, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471700, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:41:45.199: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:41:45.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4669" for this suite. STEP: Destroying namespace "webhook-4669-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.062 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":83,"skipped":1229,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:41:45.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:41:45.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5541" for this suite. 
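
The 406 above is a content-negotiation check: clients request a server-rendered Table via the Accept header, and a backend that cannot produce one (here, one without metadata support) must answer 406 Not Acceptable rather than a malformed fallback. Requesting a Table by hand is just a header; a sketch through kubectl proxy, with the pods path standing in for whichever resource is of interest:

kubectl proxy --port=8001 &

# Ask for the Table rendering (this is what kubectl get does for its columns)
curl -si "http://127.0.0.1:8001/api/v1/namespaces/default/pods" \
  -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io"

# A backend that cannot serve Tables replies "406 Not Acceptable" here instead
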
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":84,"skipped":1246,"failed":0} SSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:41:45.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 30 21:41:52.245: INFO: Successfully updated pod "adopt-release-b827s" STEP: Checking that the Job readopts the Pod May 30 21:41:52.245: INFO: Waiting up to 15m0s for pod "adopt-release-b827s" in namespace "job-182" to be "adopted" May 30 21:41:52.249: INFO: Pod "adopt-release-b827s": Phase="Running", Reason="", readiness=true. Elapsed: 4.374863ms May 30 21:41:54.254: INFO: Pod "adopt-release-b827s": Phase="Running", Reason="", readiness=true. Elapsed: 2.009213467s May 30 21:41:54.254: INFO: Pod "adopt-release-b827s" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 30 21:41:54.762: INFO: Successfully updated pod "adopt-release-b827s" STEP: Checking that the Job releases the Pod May 30 21:41:54.762: INFO: Waiting up to 15m0s for pod "adopt-release-b827s" in namespace "job-182" to be "released" May 30 21:41:54.776: INFO: Pod "adopt-release-b827s": Phase="Running", Reason="", readiness=true. Elapsed: 13.949559ms May 30 21:41:56.780: INFO: Pod "adopt-release-b827s": Phase="Running", Reason="", readiness=true. Elapsed: 2.017872606s May 30 21:41:56.780: INFO: Pod "adopt-release-b827s" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:41:56.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-182" for this suite. 
• [SLOW TEST:11.144 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":85,"skipped":1250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:41:56.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 30 21:41:57.711: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 30 21:41:59.721: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471717, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471717, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471717, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471717, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:42:02.755: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:42:02.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:42:04.122: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "crd-webhook-111" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.443 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":86,"skipped":1273,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:42:04.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:42:09.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4833" for this suite. 
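
The contract checked above is the watch API's ordering guarantee: watchers started from the same resourceVersion must observe the same events in the same order. The same guarantee can be eyeballed with raw watch streams; a sketch via kubectl proxy with a placeholder namespace, comparing the first events of two concurrent watches:

kubectl proxy --port=8001 &

# Starting point: the list's resourceVersion
RV=$(kubectl get configmaps -n default -o jsonpath='{.metadata.resourceVersion}')

# Two watches from the same resourceVersion; one JSON watch event per line
curl -sN "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}" > watch-a.jsonl &
curl -sN "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}" > watch-b.jsonl &
sleep 30; kill %2 %3 %1

# Identical ordering means the common prefix of the two streams is identical
diff <(head -n 20 watch-a.jsonl) <(head -n 20 watch-b.jsonl)
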
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":87,"skipped":1290,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:42:09.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 30 21:42:09.351: INFO: Pod name pod-release: Found 0 pods out of 1 May 30 21:42:14.355: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:42:15.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1966" for this suite. • [SLOW TEST:6.234 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":88,"skipped":1300,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:42:15.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6654.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6654.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 21:42:22.038: INFO: DNS probes using dns-6654/dns-test-26e9bc08-b0a8-431d-9860-7bd9fd4b5aec succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:42:22.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6654" for this suite. • [SLOW TEST:7.388 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":89,"skipped":1413,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:42:22.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5031 STEP: creating a selector STEP: Creating the service pods in kubernetes May 30 21:42:22.822: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 30 21:42:49.009: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.16:8080/dial?request=hostname&protocol=udp&host=10.244.1.225&port=8081&tries=1'] Namespace:pod-network-test-5031 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} May 30 21:42:49.009: INFO: >>> kubeConfig: /root/.kube/config I0530 21:42:49.076857 6 log.go:172] (0xc00293c2c0) (0xc002882280) Create stream I0530 21:42:49.076891 6 log.go:172] (0xc00293c2c0) (0xc002882280) Stream added, broadcasting: 1 I0530 21:42:49.079532 6 log.go:172] (0xc00293c2c0) Reply frame received for 1 I0530 21:42:49.079588 6 log.go:172] (0xc00293c2c0) (0xc0023519a0) Create stream I0530 21:42:49.079605 6 log.go:172] (0xc00293c2c0) (0xc0023519a0) Stream added, broadcasting: 3 I0530 21:42:49.080492 6 log.go:172] (0xc00293c2c0) Reply frame received for 3 I0530 21:42:49.080532 6 log.go:172] (0xc00293c2c0) (0xc001d675e0) Create stream I0530 21:42:49.080548 6 log.go:172] (0xc00293c2c0) (0xc001d675e0) Stream added, broadcasting: 5 I0530 21:42:49.081738 6 log.go:172] (0xc00293c2c0) Reply frame received for 5 I0530 21:42:49.323902 6 log.go:172] (0xc00293c2c0) Data frame received for 3 I0530 21:42:49.323928 6 log.go:172] (0xc0023519a0) (3) Data frame handling I0530 21:42:49.323952 6 log.go:172] (0xc0023519a0) (3) Data frame sent I0530 21:42:49.324841 6 log.go:172] (0xc00293c2c0) Data frame received for 5 I0530 21:42:49.324862 6 log.go:172] (0xc001d675e0) (5) Data frame handling I0530 21:42:49.325082 6 log.go:172] (0xc00293c2c0) Data frame received for 3 I0530 21:42:49.325290 6 log.go:172] (0xc0023519a0) (3) Data frame handling I0530 21:42:49.327386 6 log.go:172] (0xc00293c2c0) Data frame received for 1 I0530 21:42:49.327409 6 log.go:172] (0xc002882280) (1) Data frame handling I0530 21:42:49.327437 6 log.go:172] (0xc002882280) (1) Data frame sent I0530 21:42:49.327457 6 log.go:172] (0xc00293c2c0) (0xc002882280) Stream removed, broadcasting: 1 I0530 21:42:49.327480 6 log.go:172] (0xc00293c2c0) Go away received I0530 21:42:49.327599 6 log.go:172] (0xc00293c2c0) (0xc002882280) Stream removed, broadcasting: 1 I0530 21:42:49.327626 6 log.go:172] (0xc00293c2c0) (0xc0023519a0) Stream removed, broadcasting: 3 I0530 21:42:49.327639 6 log.go:172] (0xc00293c2c0) (0xc001d675e0) Stream removed, broadcasting: 5 May 30 21:42:49.327: INFO: Waiting for responses: map[] May 30 21:42:49.331: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.16:8080/dial?request=hostname&protocol=udp&host=10.244.2.15&port=8081&tries=1'] Namespace:pod-network-test-5031 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:42:49.331: INFO: >>> kubeConfig: /root/.kube/config I0530 21:42:49.363189 6 log.go:172] (0xc00293c9a0) (0xc0028825a0) Create stream I0530 21:42:49.363267 6 log.go:172] (0xc00293c9a0) (0xc0028825a0) Stream added, broadcasting: 1 I0530 21:42:49.366902 6 log.go:172] (0xc00293c9a0) Reply frame received for 1 I0530 21:42:49.366961 6 log.go:172] (0xc00293c9a0) (0xc002351a40) Create stream I0530 21:42:49.366979 6 log.go:172] (0xc00293c9a0) (0xc002351a40) Stream added, broadcasting: 3 I0530 21:42:49.367947 6 log.go:172] (0xc00293c9a0) Reply frame received for 3 I0530 21:42:49.368000 6 log.go:172] (0xc00293c9a0) (0xc002351b80) Create stream I0530 21:42:49.368014 6 log.go:172] (0xc00293c9a0) (0xc002351b80) Stream added, broadcasting: 5 I0530 21:42:49.368865 6 log.go:172] (0xc00293c9a0) Reply frame received for 5 I0530 21:42:49.440037 6 log.go:172] (0xc00293c9a0) Data frame received for 3 I0530 21:42:49.440069 6 log.go:172] (0xc002351a40) (3) Data frame handling I0530 21:42:49.440083 6 log.go:172] (0xc002351a40) (3) Data frame sent I0530 21:42:49.440659 6 log.go:172] 
(0xc00293c9a0) Data frame received for 3 I0530 21:42:49.440686 6 log.go:172] (0xc002351a40) (3) Data frame handling I0530 21:42:49.440703 6 log.go:172] (0xc00293c9a0) Data frame received for 5 I0530 21:42:49.440710 6 log.go:172] (0xc002351b80) (5) Data frame handling I0530 21:42:49.442205 6 log.go:172] (0xc00293c9a0) Data frame received for 1 I0530 21:42:49.442308 6 log.go:172] (0xc0028825a0) (1) Data frame handling I0530 21:42:49.442355 6 log.go:172] (0xc0028825a0) (1) Data frame sent I0530 21:42:49.442375 6 log.go:172] (0xc00293c9a0) (0xc0028825a0) Stream removed, broadcasting: 1 I0530 21:42:49.442423 6 log.go:172] (0xc00293c9a0) Go away received I0530 21:42:49.442473 6 log.go:172] (0xc00293c9a0) (0xc0028825a0) Stream removed, broadcasting: 1 I0530 21:42:49.442505 6 log.go:172] (0xc00293c9a0) (0xc002351a40) Stream removed, broadcasting: 3 I0530 21:42:49.442518 6 log.go:172] (0xc00293c9a0) (0xc002351b80) Stream removed, broadcasting: 5 May 30 21:42:49.442: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:42:49.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5031" for this suite. • [SLOW TEST:26.681 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1455,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:42:49.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 30 21:42:50.287: INFO: Pod name wrapped-volume-race-f6a84773-c957-4ee2-b619-34a5fa7b2a55: Found 0 pods out of 5 May 30 21:42:55.380: INFO: Pod name wrapped-volume-race-f6a84773-c957-4ee2-b619-34a5fa7b2a55: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f6a84773-c957-4ee2-b619-34a5fa7b2a55 in namespace emptydir-wrapper-3851, will wait for the garbage collector to delete the pods May 30 21:43:07.801: INFO: Deleting ReplicationController wrapped-volume-race-f6a84773-c957-4ee2-b619-34a5fa7b2a55 took: 8.385063ms May 30 21:43:08.102: INFO: Terminating ReplicationController 
wrapped-volume-race-f6a84773-c957-4ee2-b619-34a5fa7b2a55 pods took: 300.31231ms STEP: Creating RC which spawns configmap-volume pods May 30 21:43:20.358: INFO: Pod name wrapped-volume-race-a6a70435-5041-40dc-80b6-2bb46d1db50c: Found 0 pods out of 5 May 30 21:43:25.365: INFO: Pod name wrapped-volume-race-a6a70435-5041-40dc-80b6-2bb46d1db50c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a6a70435-5041-40dc-80b6-2bb46d1db50c in namespace emptydir-wrapper-3851, will wait for the garbage collector to delete the pods May 30 21:43:39.459: INFO: Deleting ReplicationController wrapped-volume-race-a6a70435-5041-40dc-80b6-2bb46d1db50c took: 13.115225ms May 30 21:43:39.859: INFO: Terminating ReplicationController wrapped-volume-race-a6a70435-5041-40dc-80b6-2bb46d1db50c pods took: 400.353373ms STEP: Creating RC which spawns configmap-volume pods May 30 21:43:49.994: INFO: Pod name wrapped-volume-race-dae2c05b-4e13-4179-81d1-8aeeaffe8259: Found 0 pods out of 5 May 30 21:43:55.000: INFO: Pod name wrapped-volume-race-dae2c05b-4e13-4179-81d1-8aeeaffe8259: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-dae2c05b-4e13-4179-81d1-8aeeaffe8259 in namespace emptydir-wrapper-3851, will wait for the garbage collector to delete the pods May 30 21:44:09.107: INFO: Deleting ReplicationController wrapped-volume-race-dae2c05b-4e13-4179-81d1-8aeeaffe8259 took: 6.596685ms May 30 21:44:09.408: INFO: Terminating ReplicationController wrapped-volume-race-dae2c05b-4e13-4179-81d1-8aeeaffe8259 pods took: 300.258459ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:44:21.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3851" for this suite. 
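The "Creating 50 configmaps" step above can be approximated by hand when investigating configmap-volume races. A minimal sketch, assuming kubectl access; the names and payload are illustrative, not the suite's:

# Create 50 small ConfigMaps similar to the ones the race test mounts
for i in $(seq 1 50); do
  kubectl create configmap race-cm-$i --from-literal=data="$i"
done
# ...mount them from several pods at once, then clean up:
for i in $(seq 1 50); do
  kubectl delete configmap race-cm-$i
done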
• [SLOW TEST:91.895 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":91,"skipped":1467,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:44:21.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:44:21.958: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:44:23.970: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471862, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471862, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471862, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471861, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:44:27.026: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:44:27.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9576-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:44:27.769: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "webhook-8757" for this suite. STEP: Destroying namespace "webhook-8757-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.646 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":92,"skipped":1505,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:44:27.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4443 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-4443 May 30 21:44:28.438: INFO: Found 0 stateful pods, waiting for 1 May 30 21:44:38.443: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 30 21:44:38.462: INFO: Deleting all statefulset in ns statefulset-4443 May 30 21:44:38.468: INFO: Scaling statefulset ss to 0 May 30 21:44:58.542: INFO: Waiting for statefulset status.replicas updated to 0 May 30 21:44:58.544: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:44:58.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4443" for this suite. 
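The scale subresource read and updated above is an ordinary API endpoint. A minimal sketch, assuming a StatefulSet named ss in the default namespace (both names are placeholders):

# Read the /scale subresource directly:
kubectl get --raw /apis/apps/v1/namespaces/default/statefulsets/ss/scale
# Update replicas through the same subresource:
kubectl scale statefulset ss --replicas=2
# Confirm spec.replicas was modified:
kubectl get statefulset ss -o jsonpath='{.spec.replicas}'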
• [SLOW TEST:30.588 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":93,"skipped":1516,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:44:58.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:44:59.534: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:45:01.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471899, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471899, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471899, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726471899, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:45:04.659: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:45:05.163: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5342" for this suite. STEP: Destroying namespace "webhook-5342-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.683 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":94,"skipped":1522,"failed":0} SSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:45:05.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 30 21:45:09.851: INFO: Successfully updated pod "pod-update-activedeadlineseconds-94cd0ca6-2efa-47c0-b58b-442e538d1051" May 30 21:45:09.851: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-94cd0ca6-2efa-47c0-b58b-442e538d1051" in namespace "pods-4395" to be "terminated due to deadline exceeded" May 30 21:45:09.874: INFO: Pod "pod-update-activedeadlineseconds-94cd0ca6-2efa-47c0-b58b-442e538d1051": Phase="Running", Reason="", readiness=true. Elapsed: 23.68859ms May 30 21:45:11.879: INFO: Pod "pod-update-activedeadlineseconds-94cd0ca6-2efa-47c0-b58b-442e538d1051": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.028515017s May 30 21:45:11.879: INFO: Pod "pod-update-activedeadlineseconds-94cd0ca6-2efa-47c0-b58b-442e538d1051" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:45:11.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4395" for this suite. 
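activeDeadlineSeconds is one of the few pod spec fields that may be updated on a running pod, which is what this test relies on. A minimal sketch of the same flip from Running to Failed/DeadlineExceeded; the pod name and deadline are placeholders:

# Give an already-running pod a 5-second deadline:
kubectl patch pod some-running-pod -p '{"spec":{"activeDeadlineSeconds":5}}'
# Watch the phase move from Running to Failed (reason DeadlineExceeded):
kubectl get pod some-running-pod -w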
• [SLOW TEST:6.626 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1526,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:45:11.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 30 21:45:16.018: INFO: Pod pod-hostip-52f3de17-dbfc-4908-88e6-b57fea3d7efd has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:45:16.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9289" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1539,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:45:16.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 30 21:45:16.082: INFO: Waiting up to 5m0s for pod "downward-api-3749a421-22ed-4fad-9b46-b6de54ae9523" in namespace "downward-api-63" to be "success or failure" May 30 21:45:16.086: INFO: Pod "downward-api-3749a421-22ed-4fad-9b46-b6de54ae9523": Phase="Pending", Reason="", readiness=false. Elapsed: 3.367225ms May 30 21:45:18.103: INFO: Pod "downward-api-3749a421-22ed-4fad-9b46-b6de54ae9523": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020391737s May 30 21:45:20.107: INFO: Pod "downward-api-3749a421-22ed-4fad-9b46-b6de54ae9523": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024464945s STEP: Saw pod success May 30 21:45:20.107: INFO: Pod "downward-api-3749a421-22ed-4fad-9b46-b6de54ae9523" satisfied condition "success or failure" May 30 21:45:20.109: INFO: Trying to get logs from node jerma-worker2 pod downward-api-3749a421-22ed-4fad-9b46-b6de54ae9523 container dapi-container: STEP: delete the pod May 30 21:45:20.145: INFO: Waiting for pod downward-api-3749a421-22ed-4fad-9b46-b6de54ae9523 to disappear May 30 21:45:20.175: INFO: Pod downward-api-3749a421-22ed-4fad-9b46-b6de54ae9523 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:45:20.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-63" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:45:20.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:45:36.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5785" for this suite. • [SLOW TEST:16.267 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":98,"skipped":1605,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:45:36.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-9668d819-d51c-4a32-bd63-6194ab42249c STEP: Creating a pod to test consume configMaps May 30 21:45:36.570: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a5c40dd9-9f1f-405f-889b-d5e1b2adeb4c" in namespace "projected-8050" to be "success or failure" May 30 21:45:36.602: INFO: Pod "pod-projected-configmaps-a5c40dd9-9f1f-405f-889b-d5e1b2adeb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.13516ms May 30 21:45:38.607: INFO: Pod "pod-projected-configmaps-a5c40dd9-9f1f-405f-889b-d5e1b2adeb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036618897s May 30 21:45:40.611: INFO: Pod "pod-projected-configmaps-a5c40dd9-9f1f-405f-889b-d5e1b2adeb4c": Phase="Running", Reason="", readiness=true. Elapsed: 4.040589364s May 30 21:45:42.615: INFO: Pod "pod-projected-configmaps-a5c40dd9-9f1f-405f-889b-d5e1b2adeb4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045266932s STEP: Saw pod success May 30 21:45:42.615: INFO: Pod "pod-projected-configmaps-a5c40dd9-9f1f-405f-889b-d5e1b2adeb4c" satisfied condition "success or failure" May 30 21:45:42.619: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-a5c40dd9-9f1f-405f-889b-d5e1b2adeb4c container projected-configmap-volume-test: STEP: delete the pod May 30 21:45:42.680: INFO: Waiting for pod pod-projected-configmaps-a5c40dd9-9f1f-405f-889b-d5e1b2adeb4c to disappear May 30 21:45:42.685: INFO: Pod pod-projected-configmaps-a5c40dd9-9f1f-405f-889b-d5e1b2adeb4c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:45:42.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8050" for this suite. 
• [SLOW TEST:6.242 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1643,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:45:42.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 30 21:45:42.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3517' May 30 21:45:45.688: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 30 21:45:45.688: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 30 21:45:49.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3517' May 30 21:45:50.012: INFO: stderr: "" May 30 21:45:50.012: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:45:50.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3517" for this suite. 
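The stderr captured above notes that kubectl run with --generator=deployment/apps.v1 is deprecated (the generators were removed in later kubectl releases). The non-deprecated equivalent of the command the test runs:

kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
kubectl delete deployment e2e-test-httpd-deployment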
• [SLOW TEST:7.327 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1622 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":100,"skipped":1669,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:45:50.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 30 21:45:58.198: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 21:45:58.207: INFO: Pod pod-with-prestop-http-hook still exists May 30 21:46:00.207: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 21:46:00.211: INFO: Pod pod-with-prestop-http-hook still exists May 30 21:46:02.207: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 21:46:02.211: INFO: Pod pod-with-prestop-http-hook still exists May 30 21:46:04.207: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 21:46:04.210: INFO: Pod pod-with-prestop-http-hook still exists May 30 21:46:06.207: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 21:46:06.210: INFO: Pod pod-with-prestop-http-hook still exists May 30 21:46:08.207: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 21:46:08.211: INFO: Pod pod-with-prestop-http-hook still exists May 30 21:46:10.207: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 21:46:10.211: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:46:10.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6306" for this suite. 
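The hook under test is a preStop httpGet handler: deleting the pod makes the kubelet issue the HTTP GET before stopping the container. A minimal manifest sketch; the image, path, and target host/port are illustrative, not the suite's:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          port: 8080
          host: 10.0.0.1   # illustrative: an endpoint that records the hook request
EOF
# Deleting the pod fires the preStop GET during graceful termination:
kubectl delete pod pod-with-prestop-http-hook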
• [SLOW TEST:20.205 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1680,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:46:10.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:46:10.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17803900-645a-4ae1-afa3-46bfde71d7f0" in namespace "downward-api-3785" to be "success or failure" May 30 21:46:10.433: INFO: Pod "downwardapi-volume-17803900-645a-4ae1-afa3-46bfde71d7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 104.499619ms May 30 21:46:12.437: INFO: Pod "downwardapi-volume-17803900-645a-4ae1-afa3-46bfde71d7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10814799s May 30 21:46:14.440: INFO: Pod "downwardapi-volume-17803900-645a-4ae1-afa3-46bfde71d7f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111598835s STEP: Saw pod success May 30 21:46:14.440: INFO: Pod "downwardapi-volume-17803900-645a-4ae1-afa3-46bfde71d7f0" satisfied condition "success or failure" May 30 21:46:14.443: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-17803900-645a-4ae1-afa3-46bfde71d7f0 container client-container: STEP: delete the pod May 30 21:46:14.472: INFO: Waiting for pod downwardapi-volume-17803900-645a-4ae1-afa3-46bfde71d7f0 to disappear May 30 21:46:14.529: INFO: Pod downwardapi-volume-17803900-645a-4ae1-afa3-46bfde71d7f0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:46:14.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3785" for this suite. 
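When a container declares no memory limit, the downward API falls back to reporting the node's allocatable memory, which is what the test asserts. A minimal sketch of the volume shape involved; all names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memlimit"]
    # no resources.limits.memory set, so the file below shows node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memlimit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF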
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1693,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:46:14.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:46:14.582: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d98e7372-614f-4896-8a16-2ee45fc655e9" in namespace "projected-3204" to be "success or failure" May 30 21:46:14.606: INFO: Pod "downwardapi-volume-d98e7372-614f-4896-8a16-2ee45fc655e9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.351644ms May 30 21:46:16.611: INFO: Pod "downwardapi-volume-d98e7372-614f-4896-8a16-2ee45fc655e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028827076s May 30 21:46:18.655: INFO: Pod "downwardapi-volume-d98e7372-614f-4896-8a16-2ee45fc655e9": Phase="Running", Reason="", readiness=true. Elapsed: 4.072936211s May 30 21:46:20.679: INFO: Pod "downwardapi-volume-d98e7372-614f-4896-8a16-2ee45fc655e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097169528s STEP: Saw pod success May 30 21:46:20.679: INFO: Pod "downwardapi-volume-d98e7372-614f-4896-8a16-2ee45fc655e9" satisfied condition "success or failure" May 30 21:46:20.683: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d98e7372-614f-4896-8a16-2ee45fc655e9 container client-container: STEP: delete the pod May 30 21:46:20.703: INFO: Waiting for pod downwardapi-volume-d98e7372-614f-4896-8a16-2ee45fc655e9 to disappear May 30 21:46:20.708: INFO: Pod downwardapi-volume-d98e7372-614f-4896-8a16-2ee45fc655e9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:46:20.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3204" for this suite. 
• [SLOW TEST:6.178 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1704,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:46:20.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 30 21:46:20.856: INFO: Waiting up to 5m0s for pod "client-containers-7016bd55-df9f-4c73-9b49-97de379ca98a" in namespace "containers-3762" to be "success or failure" May 30 21:46:20.864: INFO: Pod "client-containers-7016bd55-df9f-4c73-9b49-97de379ca98a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.913377ms May 30 21:46:22.868: INFO: Pod "client-containers-7016bd55-df9f-4c73-9b49-97de379ca98a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012330027s May 30 21:46:24.872: INFO: Pod "client-containers-7016bd55-df9f-4c73-9b49-97de379ca98a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016243408s STEP: Saw pod success May 30 21:46:24.872: INFO: Pod "client-containers-7016bd55-df9f-4c73-9b49-97de379ca98a" satisfied condition "success or failure" May 30 21:46:24.875: INFO: Trying to get logs from node jerma-worker2 pod client-containers-7016bd55-df9f-4c73-9b49-97de379ca98a container test-container: STEP: delete the pod May 30 21:46:24.925: INFO: Waiting for pod client-containers-7016bd55-df9f-4c73-9b49-97de379ca98a to disappear May 30 21:46:24.931: INFO: Pod client-containers-7016bd55-df9f-4c73-9b49-97de379ca98a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:46:24.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3762" for this suite. 
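Setting command on a container overrides the image's ENTRYPOINT (args would override CMD), which is the behaviour verified above. A one-line sketch; the pod name and message are illustrative:

# --command makes the trailing words the container command, overriding ENTRYPOINT:
kubectl run entrypoint-override --image=docker.io/library/busybox --restart=Never \
  --command -- /bin/echo "overridden entrypoint"
kubectl logs entrypoint-override    # prints: overridden entrypoint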
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1706,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:46:24.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 30 21:46:25.289: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix717286506/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:46:25.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5821" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":105,"skipped":1713,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:46:25.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 30 21:46:29.483: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 30 21:46:34.602: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:46:34.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8454" for this suite. 
• [SLOW TEST:9.251 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":106,"skipped":1726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:46:34.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:46:34.671: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba8898bb-e6cd-43d4-861c-bcba2d8f0b03" in namespace "projected-4970" to be "success or failure" May 30 21:46:34.733: INFO: Pod "downwardapi-volume-ba8898bb-e6cd-43d4-861c-bcba2d8f0b03": Phase="Pending", Reason="", readiness=false. Elapsed: 61.90953ms May 30 21:46:36.737: INFO: Pod "downwardapi-volume-ba8898bb-e6cd-43d4-861c-bcba2d8f0b03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066300133s May 30 21:46:38.741: INFO: Pod "downwardapi-volume-ba8898bb-e6cd-43d4-861c-bcba2d8f0b03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070614629s STEP: Saw pod success May 30 21:46:38.742: INFO: Pod "downwardapi-volume-ba8898bb-e6cd-43d4-861c-bcba2d8f0b03" satisfied condition "success or failure" May 30 21:46:38.745: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ba8898bb-e6cd-43d4-861c-bcba2d8f0b03 container client-container: STEP: delete the pod May 30 21:46:38.776: INFO: Waiting for pod downwardapi-volume-ba8898bb-e6cd-43d4-861c-bcba2d8f0b03 to disappear May 30 21:46:38.787: INFO: Pod downwardapi-volume-ba8898bb-e6cd-43d4-861c-bcba2d8f0b03 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:46:38.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4970" for this suite. 
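Resource fields such as limits.cpu can also be exposed as environment variables, with a divisor controlling the unit. A minimal sketch of that variant; names and values are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-env
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT"]
    resources:
      limits:
        cpu: 500m
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: main
          resource: limits.cpu
          divisor: 1m      # report in millicores; this prints CPU_LIMIT=500
EOF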
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1765,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:46:38.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-5e00f630-857e-4dab-96c8-48b90647439a STEP: Creating a pod to test consume secrets May 30 21:46:38.866: INFO: Waiting up to 5m0s for pod "pod-secrets-bcb66634-cc80-484b-b583-5ecbce2244f3" in namespace "secrets-6090" to be "success or failure" May 30 21:46:38.870: INFO: Pod "pod-secrets-bcb66634-cc80-484b-b583-5ecbce2244f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211815ms May 30 21:46:40.874: INFO: Pod "pod-secrets-bcb66634-cc80-484b-b583-5ecbce2244f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008545244s May 30 21:46:42.912: INFO: Pod "pod-secrets-bcb66634-cc80-484b-b583-5ecbce2244f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046619822s STEP: Saw pod success May 30 21:46:42.912: INFO: Pod "pod-secrets-bcb66634-cc80-484b-b583-5ecbce2244f3" satisfied condition "success or failure" May 30 21:46:42.915: INFO: Trying to get logs from node jerma-worker pod pod-secrets-bcb66634-cc80-484b-b583-5ecbce2244f3 container secret-volume-test: STEP: delete the pod May 30 21:46:42.931: INFO: Waiting for pod pod-secrets-bcb66634-cc80-484b-b583-5ecbce2244f3 to disappear May 30 21:46:42.936: INFO: Pod pod-secrets-bcb66634-cc80-484b-b583-5ecbce2244f3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:46:42.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6090" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1772,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:46:42.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:46:43.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8063" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":109,"skipped":1802,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:46:43.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:46:43.317: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 30 21:46:45.379: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:46:46.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6094" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":110,"skipped":1807,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:46:46.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:46:58.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6650" for this suite. • [SLOW TEST:11.688 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":111,"skipped":1810,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:46:58.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:46:58.347: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e4c2ff1-3809-4fc8-b5d6-a405f95dd265" in namespace "projected-7292" to be "success or failure" May 30 21:46:58.359: INFO: Pod "downwardapi-volume-5e4c2ff1-3809-4fc8-b5d6-a405f95dd265": Phase="Pending", Reason="", readiness=false. Elapsed: 11.365515ms May 30 21:47:00.363: INFO: Pod "downwardapi-volume-5e4c2ff1-3809-4fc8-b5d6-a405f95dd265": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015159037s May 30 21:47:02.367: INFO: Pod "downwardapi-volume-5e4c2ff1-3809-4fc8-b5d6-a405f95dd265": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01977007s STEP: Saw pod success May 30 21:47:02.367: INFO: Pod "downwardapi-volume-5e4c2ff1-3809-4fc8-b5d6-a405f95dd265" satisfied condition "success or failure" May 30 21:47:02.370: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5e4c2ff1-3809-4fc8-b5d6-a405f95dd265 container client-container: STEP: delete the pod May 30 21:47:02.400: INFO: Waiting for pod downwardapi-volume-5e4c2ff1-3809-4fc8-b5d6-a405f95dd265 to disappear May 30 21:47:02.411: INFO: Pod downwardapi-volume-5e4c2ff1-3809-4fc8-b5d6-a405f95dd265 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:47:02.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7292" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1842,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:47:02.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-b9f47282-b833-4817-949a-025ca9447d8b [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:47:02.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8447" for this suite. 
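------------------------------
The spec above only needs to attempt the create: API-server validation rejects a Secret whose data map uses the empty string as a key. A sketch of the same failure, with a hypothetical Secret name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo   # hypothetical
data:
  "": dmFsdWU=                 # empty key, rejected by validation
EOF
# Expected: an error from the server, roughly "a valid config key must
# consist of alphanumeric characters, '-', '_' or '.'".
------------------------------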
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":113,"skipped":1892,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:47:02.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 30 21:47:02.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7738' May 30 21:47:06.222: INFO: stderr: "" May 30 21:47:06.222: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 30 21:47:07.229: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:47:07.229: INFO: Found 0 / 1 May 30 21:47:08.263: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:47:08.263: INFO: Found 0 / 1 May 30 21:47:09.227: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:47:09.227: INFO: Found 1 / 1 May 30 21:47:09.227: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 30 21:47:09.229: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:47:09.229: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 30 21:47:09.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-hv2s8 --namespace=kubectl-7738 -p {"metadata":{"annotations":{"x":"y"}}}' May 30 21:47:09.357: INFO: stderr: "" May 30 21:47:09.357: INFO: stdout: "pod/agnhost-master-hv2s8 patched\n" STEP: checking annotations May 30 21:47:09.488: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:47:09.488: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:47:09.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7738" for this suite. 
• [SLOW TEST:6.961 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":114,"skipped":1900,"failed":0}
[sig-cli] Kubectl client Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:47:09.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
May 30 21:47:09.554: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
May 30 21:47:09.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001'
May 30 21:47:13.045: INFO: stderr: ""
May 30 21:47:13.045: INFO: stdout: "service/agnhost-slave created\n"
May 30 21:47:13.045: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
May 30 21:47:13.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001'
May 30 21:47:15.705: INFO: stderr: ""
May 30 21:47:15.705: INFO: stdout: "service/agnhost-master created\n"
May 30 21:47:15.706: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 30 21:47:15.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001'
May 30 21:47:18.424: INFO: stderr: ""
May 30 21:47:18.424: INFO: stdout: "service/frontend created\n"
May 30 21:47:18.425: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 30 21:47:18.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001'
May 30 21:47:19.565: INFO: stderr: ""
May 30 21:47:19.565: INFO: stdout: "deployment.apps/frontend created\n"
May 30 21:47:19.565: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 30 21:47:19.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001'
May 30 21:47:19.867: INFO: stderr: ""
May 30 21:47:19.867: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 30 21:47:19.867: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 30 21:47:19.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001'
May 30 21:47:20.575: INFO: stderr: ""
May 30 21:47:20.575: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 30 21:47:20.575: INFO: Waiting for all frontend pods to be Running.
May 30 21:47:30.625: INFO: Waiting for frontend to serve content.
May 30 21:47:30.635: INFO: Trying to add a new entry to the guestbook.
May 30 21:47:30.646: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 30 21:47:30.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001'
May 30 21:47:30.827: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 30 21:47:30.827: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May 30 21:47:30.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001'
May 30 21:47:31.008: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 30 21:47:31.008: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 30 21:47:31.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001'
May 30 21:47:31.134: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 30 21:47:31.134: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 30 21:47:31.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001'
May 30 21:47:31.251: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 30 21:47:31.251: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 30 21:47:31.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001'
May 30 21:47:31.812: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 30 21:47:31.812: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 30 21:47:31.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001'
May 30 21:47:32.291: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 30 21:47:32.291: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:47:32.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4001" for this suite.
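------------------------------
The cleanup pattern used repeatedly above, in isolation: --grace-period=0 combined with --force skips graceful termination entirely, which is why kubectl prints the "Immediate deletion" warning for every object. A sketch with hypothetical names (the test itself pipes the original manifests back in with -f -):

kubectl --namespace demo delete deployment frontend \
  --grace-period=0 --force
------------------------------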
• [SLOW TEST:23.070 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":115,"skipped":1900,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:47:32.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-b3f71e02-7c51-4234-9959-63c6a12b2270 STEP: Creating a pod to test consume configMaps May 30 21:47:32.962: INFO: Waiting up to 5m0s for pod "pod-configmaps-7283050e-ff1d-45ec-bdcc-85058a792bb2" in namespace "configmap-8413" to be "success or failure" May 30 21:47:33.012: INFO: Pod "pod-configmaps-7283050e-ff1d-45ec-bdcc-85058a792bb2": Phase="Pending", Reason="", readiness=false. Elapsed: 50.403886ms May 30 21:47:35.203: INFO: Pod "pod-configmaps-7283050e-ff1d-45ec-bdcc-85058a792bb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241699363s May 30 21:47:37.207: INFO: Pod "pod-configmaps-7283050e-ff1d-45ec-bdcc-85058a792bb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.245548216s STEP: Saw pod success May 30 21:47:37.207: INFO: Pod "pod-configmaps-7283050e-ff1d-45ec-bdcc-85058a792bb2" satisfied condition "success or failure" May 30 21:47:37.210: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-7283050e-ff1d-45ec-bdcc-85058a792bb2 container configmap-volume-test: STEP: delete the pod May 30 21:47:37.272: INFO: Waiting for pod pod-configmaps-7283050e-ff1d-45ec-bdcc-85058a792bb2 to disappear May 30 21:47:37.374: INFO: Pod pod-configmaps-7283050e-ff1d-45ec-bdcc-85058a792bb2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:47:37.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8413" for this suite. 
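------------------------------
"Consumable in multiple volumes in the same pod" amounts to two volume entries referencing the same ConfigMap, mounted at different paths. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-demo                # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-two-volumes-demo    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: one
      mountPath: /etc/cm-one
    - name: two
      mountPath: /etc/cm-two
  volumes:
  - name: one
    configMap:
      name: cm-demo            # same ConfigMap...
  - name: two
    configMap:
      name: cm-demo            # ...mounted twice
EOF
------------------------------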
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1901,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:47:37.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8550 STEP: creating a selector STEP: Creating the service pods in kubernetes May 30 21:47:37.421: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 30 21:48:03.628: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.244:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8550 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:48:03.628: INFO: >>> kubeConfig: /root/.kube/config I0530 21:48:03.660763 6 log.go:172] (0xc002f5c4d0) (0xc00135d9a0) Create stream I0530 21:48:03.660800 6 log.go:172] (0xc002f5c4d0) (0xc00135d9a0) Stream added, broadcasting: 1 I0530 21:48:03.662702 6 log.go:172] (0xc002f5c4d0) Reply frame received for 1 I0530 21:48:03.662755 6 log.go:172] (0xc002f5c4d0) (0xc00135dae0) Create stream I0530 21:48:03.662775 6 log.go:172] (0xc002f5c4d0) (0xc00135dae0) Stream added, broadcasting: 3 I0530 21:48:03.663613 6 log.go:172] (0xc002f5c4d0) Reply frame received for 3 I0530 21:48:03.663652 6 log.go:172] (0xc002f5c4d0) (0xc00135dea0) Create stream I0530 21:48:03.663663 6 log.go:172] (0xc002f5c4d0) (0xc00135dea0) Stream added, broadcasting: 5 I0530 21:48:03.664536 6 log.go:172] (0xc002f5c4d0) Reply frame received for 5 I0530 21:48:03.804615 6 log.go:172] (0xc002f5c4d0) Data frame received for 5 I0530 21:48:03.804663 6 log.go:172] (0xc00135dea0) (5) Data frame handling I0530 21:48:03.804777 6 log.go:172] (0xc002f5c4d0) Data frame received for 3 I0530 21:48:03.804833 6 log.go:172] (0xc00135dae0) (3) Data frame handling I0530 21:48:03.804853 6 log.go:172] (0xc00135dae0) (3) Data frame sent I0530 21:48:03.804864 6 log.go:172] (0xc002f5c4d0) Data frame received for 3 I0530 21:48:03.804873 6 log.go:172] (0xc00135dae0) (3) Data frame handling I0530 21:48:03.806477 6 log.go:172] (0xc002f5c4d0) Data frame received for 1 I0530 21:48:03.806499 6 log.go:172] (0xc00135d9a0) (1) Data frame handling I0530 21:48:03.806512 6 log.go:172] (0xc00135d9a0) (1) Data frame sent I0530 21:48:03.806538 6 log.go:172] (0xc002f5c4d0) (0xc00135d9a0) Stream removed, broadcasting: 1 I0530 21:48:03.806550 6 log.go:172] (0xc002f5c4d0) Go away received I0530 21:48:03.806663 6 log.go:172] (0xc002f5c4d0) (0xc00135d9a0) Stream removed, broadcasting: 1 I0530 21:48:03.806684 6 log.go:172] (0xc002f5c4d0) (0xc00135dae0) Stream removed, broadcasting: 3 
I0530 21:48:03.806691 6 log.go:172] (0xc002f5c4d0) (0xc00135dea0) Stream removed, broadcasting: 5 May 30 21:48:03.806: INFO: Found all expected endpoints: [netserver-0] May 30 21:48:03.809: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.44:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8550 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:48:03.810: INFO: >>> kubeConfig: /root/.kube/config I0530 21:48:03.841514 6 log.go:172] (0xc002f5cbb0) (0xc002812500) Create stream I0530 21:48:03.841550 6 log.go:172] (0xc002f5cbb0) (0xc002812500) Stream added, broadcasting: 1 I0530 21:48:03.843179 6 log.go:172] (0xc002f5cbb0) Reply frame received for 1 I0530 21:48:03.843208 6 log.go:172] (0xc002f5cbb0) (0xc0011005a0) Create stream I0530 21:48:03.843217 6 log.go:172] (0xc002f5cbb0) (0xc0011005a0) Stream added, broadcasting: 3 I0530 21:48:03.844129 6 log.go:172] (0xc002f5cbb0) Reply frame received for 3 I0530 21:48:03.844177 6 log.go:172] (0xc002f5cbb0) (0xc0018d5ea0) Create stream I0530 21:48:03.844191 6 log.go:172] (0xc002f5cbb0) (0xc0018d5ea0) Stream added, broadcasting: 5 I0530 21:48:03.844852 6 log.go:172] (0xc002f5cbb0) Reply frame received for 5 I0530 21:48:03.909329 6 log.go:172] (0xc002f5cbb0) Data frame received for 5 I0530 21:48:03.909367 6 log.go:172] (0xc0018d5ea0) (5) Data frame handling I0530 21:48:03.909393 6 log.go:172] (0xc002f5cbb0) Data frame received for 3 I0530 21:48:03.909422 6 log.go:172] (0xc0011005a0) (3) Data frame handling I0530 21:48:03.909443 6 log.go:172] (0xc0011005a0) (3) Data frame sent I0530 21:48:03.909458 6 log.go:172] (0xc002f5cbb0) Data frame received for 3 I0530 21:48:03.909468 6 log.go:172] (0xc0011005a0) (3) Data frame handling I0530 21:48:03.911243 6 log.go:172] (0xc002f5cbb0) Data frame received for 1 I0530 21:48:03.911271 6 log.go:172] (0xc002812500) (1) Data frame handling I0530 21:48:03.911290 6 log.go:172] (0xc002812500) (1) Data frame sent I0530 21:48:03.911309 6 log.go:172] (0xc002f5cbb0) (0xc002812500) Stream removed, broadcasting: 1 I0530 21:48:03.911332 6 log.go:172] (0xc002f5cbb0) Go away received I0530 21:48:03.911448 6 log.go:172] (0xc002f5cbb0) (0xc002812500) Stream removed, broadcasting: 1 I0530 21:48:03.911471 6 log.go:172] (0xc002f5cbb0) (0xc0011005a0) Stream removed, broadcasting: 3 I0530 21:48:03.911487 6 log.go:172] (0xc002f5cbb0) (0xc0018d5ea0) Stream removed, broadcasting: 5 May 30 21:48:03.911: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:48:03.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8550" for this suite. 
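------------------------------
The ExecWithOptions calls above are the framework's way of exec'ing into the host-network test pod and curling each netserver pod's /hostName endpoint; the test passes once every expected endpoint has answered. Run standalone it would look roughly like this (the pod IPs are per-run values taken from the log):

kubectl --namespace pod-network-test-8550 exec host-test-container-pod -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 \
    http://10.244.1.244:8080/hostName"
------------------------------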
• [SLOW TEST:26.537 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1909,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:48:03.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-9f8f42fc-5495-4c03-86f8-d6be6f902695 STEP: Creating a pod to test consume configMaps May 30 21:48:03.994: INFO: Waiting up to 5m0s for pod "pod-configmaps-934b351a-09b2-480e-8931-c5e003111cc8" in namespace "configmap-9299" to be "success or failure" May 30 21:48:04.034: INFO: Pod "pod-configmaps-934b351a-09b2-480e-8931-c5e003111cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 39.688871ms May 30 21:48:06.038: INFO: Pod "pod-configmaps-934b351a-09b2-480e-8931-c5e003111cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043916238s May 30 21:48:08.042: INFO: Pod "pod-configmaps-934b351a-09b2-480e-8931-c5e003111cc8": Phase="Running", Reason="", readiness=true. Elapsed: 4.048151917s May 30 21:48:10.171: INFO: Pod "pod-configmaps-934b351a-09b2-480e-8931-c5e003111cc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.177374419s STEP: Saw pod success May 30 21:48:10.172: INFO: Pod "pod-configmaps-934b351a-09b2-480e-8931-c5e003111cc8" satisfied condition "success or failure" May 30 21:48:10.202: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-934b351a-09b2-480e-8931-c5e003111cc8 container configmap-volume-test: STEP: delete the pod May 30 21:48:10.358: INFO: Waiting for pod pod-configmaps-934b351a-09b2-480e-8931-c5e003111cc8 to disappear May 30 21:48:10.572: INFO: Pod pod-configmaps-934b351a-09b2-480e-8931-c5e003111cc8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:48:10.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9299" for this suite. 
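------------------------------
The [LinuxOnly] non-root variant differs from the plain volume test only in the pod-level security context. A sketch with hypothetical names, running the container as UID 1000 (ConfigMap volume files default to mode 0644, so a non-root reader still succeeds):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo        # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # any non-zero UID
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "id -u && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-demo            # hypothetical, as above
EOF
------------------------------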
• [SLOW TEST:6.739 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1923,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:48:10.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:48:11.891: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:48:13.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472091, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472091, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472092, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472091, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:48:16.959: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: 
finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:48:16.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2850" for this suite. STEP: Destroying namespace "webhook-2850-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.426 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":119,"skipped":1925,"failed":0} [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:48:17.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 30 21:48:25.207: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 21:48:25.211: INFO: Pod pod-with-poststart-exec-hook still exists May 30 21:48:27.212: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 21:48:27.216: INFO: Pod pod-with-poststart-exec-hook still exists May 30 21:48:29.212: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 21:48:29.216: INFO: Pod pod-with-poststart-exec-hook still exists May 30 21:48:31.217: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 21:48:31.227: INFO: Pod pod-with-poststart-exec-hook still exists May 30 21:48:33.212: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 21:48:33.218: INFO: Pod pod-with-poststart-exec-hook still exists May 30 21:48:35.212: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 21:48:35.236: INFO: Pod pod-with-poststart-exec-hook still exists May 30 21:48:37.212: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 21:48:37.216: INFO: Pod pod-with-poststart-exec-hook still exists May 30 21:48:39.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 21:48:39.214: INFO: Pod pod-with-poststart-exec-hook still exists May 30 21:48:41.212: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 21:48:41.215: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:48:41.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6988" for this suite. 
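------------------------------
The poststart pod above in minimal form: an exec lifecycle hook that the kubelet runs right after the container starts, and the container does not reach the Running state until the hook returns. Names and image are hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo         # hypothetical
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]
EOF
------------------------------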
• [SLOW TEST:24.138 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1925,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:48:41.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:48:41.700: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:48:43.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472121, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472121, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472121, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472121, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:48:45.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472121, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472121, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472121, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472121, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:48:48.743: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:48:48.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3585" for this suite. STEP: Destroying namespace "webhook-3585-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.802 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":121,"skipped":1946,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:48:49.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-77d6074d-d032-498a-b8a2-cc9cce3829b7 STEP: Creating a pod to test consume secrets May 30 21:48:49.194: INFO: Waiting up to 5m0s for pod "pod-secrets-e12d26d6-9e72-43bc-a7c2-ece92950ab03" in namespace "secrets-5350" to be "success or failure" May 30 21:48:49.198: INFO: Pod "pod-secrets-e12d26d6-9e72-43bc-a7c2-ece92950ab03": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.821432ms May 30 21:48:51.201: INFO: Pod "pod-secrets-e12d26d6-9e72-43bc-a7c2-ece92950ab03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007621674s May 30 21:48:53.208: INFO: Pod "pod-secrets-e12d26d6-9e72-43bc-a7c2-ece92950ab03": Phase="Running", Reason="", readiness=true. Elapsed: 4.013996192s May 30 21:48:55.211: INFO: Pod "pod-secrets-e12d26d6-9e72-43bc-a7c2-ece92950ab03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017711199s STEP: Saw pod success May 30 21:48:55.211: INFO: Pod "pod-secrets-e12d26d6-9e72-43bc-a7c2-ece92950ab03" satisfied condition "success or failure" May 30 21:48:55.231: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-e12d26d6-9e72-43bc-a7c2-ece92950ab03 container secret-volume-test: STEP: delete the pod May 30 21:48:55.263: INFO: Waiting for pod pod-secrets-e12d26d6-9e72-43bc-a7c2-ece92950ab03 to disappear May 30 21:48:55.282: INFO: Pod pod-secrets-e12d26d6-9e72-43bc-a7c2-ece92950ab03 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:48:55.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5350" for this suite. STEP: Destroying namespace "secret-namespace-6569" for this suite. • [SLOW TEST:6.270 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1949,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:48:55.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-a7eff88e-1006-4a5f-8e3a-f9bb5f438509 STEP: Creating a pod to test consume configMaps May 30 21:48:55.385: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6f333ee-9867-4a0d-a51a-c84e57158c39" in namespace "configmap-156" to be "success or failure" May 30 21:48:55.399: INFO: Pod "pod-configmaps-c6f333ee-9867-4a0d-a51a-c84e57158c39": Phase="Pending", Reason="", readiness=false. Elapsed: 14.717611ms May 30 21:48:57.417: INFO: Pod "pod-configmaps-c6f333ee-9867-4a0d-a51a-c84e57158c39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032296115s May 30 21:48:59.421: INFO: Pod "pod-configmaps-c6f333ee-9867-4a0d-a51a-c84e57158c39": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036330408s STEP: Saw pod success May 30 21:48:59.421: INFO: Pod "pod-configmaps-c6f333ee-9867-4a0d-a51a-c84e57158c39" satisfied condition "success or failure" May 30 21:48:59.423: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-c6f333ee-9867-4a0d-a51a-c84e57158c39 container configmap-volume-test: STEP: delete the pod May 30 21:48:59.689: INFO: Waiting for pod pod-configmaps-c6f333ee-9867-4a0d-a51a-c84e57158c39 to disappear May 30 21:48:59.698: INFO: Pod pod-configmaps-c6f333ee-9867-4a0d-a51a-c84e57158c39 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:48:59.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-156" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1950,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:48:59.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 30 21:49:07.176: INFO: 9 pods remaining May 30 21:49:07.176: INFO: 0 pods has nil DeletionTimestamp May 30 21:49:07.176: INFO: May 30 21:49:08.536: INFO: 0 pods remaining May 30 21:49:08.536: INFO: 0 pods has nil DeletionTimestamp May 30 21:49:08.536: INFO: STEP: Gathering metrics W0530 21:49:09.639954 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
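------------------------------
"If the deleteOptions says so" refers to foreground cascading deletion: with propagationPolicy Foreground the replication controller is only stamped with a deletionTimestamp and remains visible until the garbage collector has removed all of its pods, which is what the "9 pods remaining / 0 pods remaining" polling above observes. A sketch against the REST API (rc and namespace names hypothetical; newer kubectl can do the same with --cascade=foreground):

kubectl proxy --port=8001 &
curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8001/api/v1/namespaces/demo/replicationcontrollers/my-rc
------------------------------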
May 30 21:49:09.640: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 30 21:49:09.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6833" for this suite.
• [SLOW TEST:9.941 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":124,"skipped":1970,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 30 21:49:09.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 30 21:49:10.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5605'
May 30 21:49:11.559: INFO: stderr: ""
May 30 21:49:11.559: INFO: stdout: "replicationcontroller/agnhost-master created\n"
May 30 21:49:11.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5605'
May 30 21:49:13.439: INFO: stderr: ""
May 30 21:49:13.439: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 30 21:49:14.445: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:49:14.445: INFO: Found 0 / 1 May 30 21:49:15.478: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:49:15.478: INFO: Found 0 / 1 May 30 21:49:16.443: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:49:16.443: INFO: Found 1 / 1 May 30 21:49:16.443: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 30 21:49:16.446: INFO: Selector matched 1 pods for map[app:agnhost] May 30 21:49:16.446: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 30 21:49:16.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-9slcf --namespace=kubectl-5605' May 30 21:49:16.566: INFO: stderr: "" May 30 21:49:16.566: INFO: stdout: "Name: agnhost-master-9slcf\nNamespace: kubectl-5605\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Sat, 30 May 2020 21:49:12 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.2\nIPs:\n IP: 10.244.1.2\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://4aa5949db74f25b25a773e9832cdc37afc5a3e24e5e2d901c987be1ec802b58f\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 30 May 2020 21:49:15 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-pr8vt (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-pr8vt:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-pr8vt\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-5605/agnhost-master-9slcf to jerma-worker\n Normal Pulled 2s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" May 30 21:49:16.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5605' May 30 21:49:16.729: INFO: stderr: "" May 30 21:49:16.729: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5605\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-9slcf\n" May 30 21:49:16.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5605' May 30 21:49:16.831: INFO: stderr: "" May 30 21:49:16.831: INFO: stdout: 
"Name: agnhost-master\nNamespace: kubectl-5605\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.110.250.16\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.2:6379\nSession Affinity: None\nEvents: \n" May 30 21:49:16.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 30 21:49:16.955: INFO: stderr: "" May 30 21:49:16.955: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Sat, 30 May 2020 21:49:15 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 30 May 2020 21:49:02 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 30 May 2020 21:49:02 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 30 May 2020 21:49:02 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 30 May 2020 21:49:02 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 76d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 76d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 76d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 76d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 76d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 76d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 76d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 76d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 76d\nAllocated resources:\n (Total limits may 
be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 30 21:49:16.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5605' May 30 21:49:17.061: INFO: stderr: "" May 30 21:49:17.061: INFO: stdout: "Name: kubectl-5605\nLabels: e2e-framework=kubectl\n e2e-run=3478e8b2-f39c-4d40-993e-0dbc31ec855d\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:49:17.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5605" for this suite. • [SLOW TEST:7.421 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":125,"skipped":1989,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:49:17.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 30 21:49:17.168: INFO: Waiting up to 5m0s for pod "pod-4d89857a-4c24-407a-acd5-225c30833323" in namespace "emptydir-5955" to be "success or failure" May 30 21:49:17.172: INFO: Pod "pod-4d89857a-4c24-407a-acd5-225c30833323": Phase="Pending", Reason="", readiness=false. Elapsed: 3.804889ms May 30 21:49:19.209: INFO: Pod "pod-4d89857a-4c24-407a-acd5-225c30833323": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040555719s May 30 21:49:21.219: INFO: Pod "pod-4d89857a-4c24-407a-acd5-225c30833323": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050230585s STEP: Saw pod success May 30 21:49:21.219: INFO: Pod "pod-4d89857a-4c24-407a-acd5-225c30833323" satisfied condition "success or failure" May 30 21:49:21.221: INFO: Trying to get logs from node jerma-worker pod pod-4d89857a-4c24-407a-acd5-225c30833323 container test-container: STEP: delete the pod May 30 21:49:21.240: INFO: Waiting for pod pod-4d89857a-4c24-407a-acd5-225c30833323 to disappear May 30 21:49:21.244: INFO: Pod pod-4d89857a-4c24-407a-acd5-225c30833323 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:49:21.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5955" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2015,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:49:21.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 30 21:49:29.452: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 21:49:29.457: INFO: Pod pod-with-prestop-exec-hook still exists May 30 21:49:31.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 21:49:31.462: INFO: Pod pod-with-prestop-exec-hook still exists May 30 21:49:33.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 21:49:33.462: INFO: Pod pod-with-prestop-exec-hook still exists May 30 21:49:35.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 21:49:35.462: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:49:35.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6890" for this suite. 
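For reference on what this spec exercises: a container can declare a PreStop exec hook that the kubelet runs before stopping the container, and the assertions above check that the hook ran before the pod disappeared. A minimal sketch of such a pod spec, assuming the k8s.io/api types of the same vintage as this suite (v0.17, where the handler type is corev1.Handler; later releases renamed it LifecycleHandler). The pod name, image, and hook command are illustrative, not taken from the suite.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// prestopPod builds a pod whose container runs a shell command as a
// PreStop hook; the kubelet executes the hook before killing the
// container, bounded by terminationGracePeriodSeconds.
func prestopPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo prestop"},
						},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", prestopPod().Spec.Containers[0].Lifecycle.PreStop)
}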
• [SLOW TEST:14.226 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2077,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:49:35.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:49:39.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6340" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2079,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:49:39.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 30 21:49:39.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-11' May 30 21:49:40.461: INFO: stderr: "" May 30 21:49:40.461: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 30 21:49:40.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-11' May 30 21:49:40.586: INFO: stderr: "" May 30 21:49:40.586: INFO: stdout: "update-demo-nautilus-vb95p update-demo-nautilus-zrj2n " May 30 21:49:40.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vb95p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:49:40.684: INFO: stderr: "" May 30 21:49:40.684: INFO: stdout: "" May 30 21:49:40.684: INFO: update-demo-nautilus-vb95p is created but not running May 30 21:49:45.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-11' May 30 21:49:45.783: INFO: stderr: "" May 30 21:49:45.783: INFO: stdout: "update-demo-nautilus-vb95p update-demo-nautilus-zrj2n " May 30 21:49:45.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vb95p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:49:45.887: INFO: stderr: "" May 30 21:49:45.887: INFO: stdout: "true" May 30 21:49:45.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vb95p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:49:46.008: INFO: stderr: "" May 30 21:49:46.008: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 21:49:46.008: INFO: validating pod update-demo-nautilus-vb95p May 30 21:49:46.017: INFO: got data: { "image": "nautilus.jpg" } May 30 21:49:46.017: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 30 21:49:46.017: INFO: update-demo-nautilus-vb95p is verified up and running May 30 21:49:46.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zrj2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:49:46.147: INFO: stderr: "" May 30 21:49:46.147: INFO: stdout: "true" May 30 21:49:46.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zrj2n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:49:46.264: INFO: stderr: "" May 30 21:49:46.264: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 21:49:46.264: INFO: validating pod update-demo-nautilus-zrj2n May 30 21:49:46.280: INFO: got data: { "image": "nautilus.jpg" } May 30 21:49:46.280: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 30 21:49:46.280: INFO: update-demo-nautilus-zrj2n is verified up and running STEP: scaling down the replication controller May 30 21:49:46.283: INFO: scanned /root for discovery docs: May 30 21:49:46.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-11' May 30 21:49:47.461: INFO: stderr: "" May 30 21:49:47.461: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 30 21:49:47.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-11' May 30 21:49:47.566: INFO: stderr: "" May 30 21:49:47.566: INFO: stdout: "update-demo-nautilus-vb95p update-demo-nautilus-zrj2n " STEP: Replicas for name=update-demo: expected=1 actual=2 May 30 21:49:52.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-11' May 30 21:49:52.659: INFO: stderr: "" May 30 21:49:52.659: INFO: stdout: "update-demo-nautilus-vb95p update-demo-nautilus-zrj2n " STEP: Replicas for name=update-demo: expected=1 actual=2 May 30 21:49:57.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-11' May 30 21:49:57.752: INFO: stderr: "" May 30 21:49:57.752: INFO: stdout: "update-demo-nautilus-vb95p update-demo-nautilus-zrj2n " STEP: Replicas for name=update-demo: expected=1 actual=2 May 30 21:50:02.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-11' May 30 21:50:02.850: INFO: stderr: "" May 30 21:50:02.850: INFO: stdout: "update-demo-nautilus-vb95p " May 30 21:50:02.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vb95p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:50:02.940: INFO: stderr: "" May 30 21:50:02.940: INFO: stdout: "true" May 30 21:50:02.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vb95p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:50:03.040: INFO: stderr: "" May 30 21:50:03.040: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 21:50:03.040: INFO: validating pod update-demo-nautilus-vb95p May 30 21:50:03.044: INFO: got data: { "image": "nautilus.jpg" } May 30 21:50:03.044: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 30 21:50:03.044: INFO: update-demo-nautilus-vb95p is verified up and running STEP: scaling up the replication controller May 30 21:50:03.046: INFO: scanned /root for discovery docs: May 30 21:50:03.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-11' May 30 21:50:04.175: INFO: stderr: "" May 30 21:50:04.175: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 30 21:50:04.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-11' May 30 21:50:04.277: INFO: stderr: "" May 30 21:50:04.277: INFO: stdout: "update-demo-nautilus-lr9rf update-demo-nautilus-vb95p " May 30 21:50:04.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lr9rf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:50:04.367: INFO: stderr: "" May 30 21:50:04.367: INFO: stdout: "" May 30 21:50:04.367: INFO: update-demo-nautilus-lr9rf is created but not running May 30 21:50:09.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-11' May 30 21:50:09.470: INFO: stderr: "" May 30 21:50:09.470: INFO: stdout: "update-demo-nautilus-lr9rf update-demo-nautilus-vb95p " May 30 21:50:09.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lr9rf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:50:09.567: INFO: stderr: "" May 30 21:50:09.567: INFO: stdout: "true" May 30 21:50:09.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lr9rf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:50:09.675: INFO: stderr: "" May 30 21:50:09.675: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 21:50:09.675: INFO: validating pod update-demo-nautilus-lr9rf May 30 21:50:09.680: INFO: got data: { "image": "nautilus.jpg" } May 30 21:50:09.680: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 30 21:50:09.680: INFO: update-demo-nautilus-lr9rf is verified up and running May 30 21:50:09.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vb95p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:50:09.779: INFO: stderr: "" May 30 21:50:09.779: INFO: stdout: "true" May 30 21:50:09.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vb95p -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-11' May 30 21:50:09.882: INFO: stderr: "" May 30 21:50:09.882: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 21:50:09.882: INFO: validating pod update-demo-nautilus-vb95p May 30 21:50:09.894: INFO: got data: { "image": "nautilus.jpg" } May 30 21:50:09.894: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 30 21:50:09.894: INFO: update-demo-nautilus-vb95p is verified up and running STEP: using delete to clean up resources May 30 21:50:09.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-11' May 30 21:50:10.005: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 30 21:50:10.005: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 30 21:50:10.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-11' May 30 21:50:10.111: INFO: stderr: "No resources found in kubectl-11 namespace.\n" May 30 21:50:10.111: INFO: stdout: "" May 30 21:50:10.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-11 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 30 21:50:10.216: INFO: stderr: "" May 30 21:50:10.216: INFO: stdout: "update-demo-nautilus-lr9rf\nupdate-demo-nautilus-vb95p\n" May 30 21:50:10.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-11' May 30 21:50:10.807: INFO: stderr: "No resources found in kubectl-11 namespace.\n" May 30 21:50:10.807: INFO: stdout: "" May 30 21:50:10.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-11 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 30 21:50:10.907: INFO: stderr: "" May 30 21:50:10.907: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:50:10.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-11" for this suite. 
• [SLOW TEST:31.312 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":129,"skipped":2093,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:50:10.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 30 21:50:11.224: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. May 30 21:50:12.036: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 30 21:50:14.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472212, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472212, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472212, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472212, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:50:16.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472212, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472212, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472212, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472212, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:50:19.249: INFO: Waited 847.076889ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:50:20.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9655" for this suite. • [SLOW TEST:9.212 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":130,"skipped":2113,"failed":0} SSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:50:20.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-1744 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1744 to expose endpoints map[] May 30 21:50:20.385: INFO: Get endpoints failed (15.940916ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 30 21:50:21.388: INFO: successfully validated that service endpoint-test2 in namespace services-1744 exposes endpoints map[] (1.019122137s elapsed) STEP: Creating pod pod1 in namespace services-1744 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1744 to expose endpoints map[pod1:[80]] May 30 21:50:25.447: INFO: successfully validated that service endpoint-test2 in namespace services-1744 exposes endpoints map[pod1:[80]] (4.053065539s elapsed) STEP: Creating pod pod2 in namespace services-1744 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1744 to expose endpoints map[pod1:[80] pod2:[80]] May 30 21:50:28.688: INFO: successfully validated that service endpoint-test2 in namespace services-1744 exposes endpoints map[pod1:[80] pod2:[80]] (3.236533339s elapsed) STEP: Deleting pod pod1 in namespace services-1744 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1744 to expose endpoints map[pod2:[80]] May 30 21:50:29.706: INFO: successfully validated 
that service endpoint-test2 in namespace services-1744 exposes endpoints map[pod2:[80]] (1.015362052s elapsed) STEP: Deleting pod pod2 in namespace services-1744 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1744 to expose endpoints map[] May 30 21:50:30.764: INFO: successfully validated that service endpoint-test2 in namespace services-1744 exposes endpoints map[] (1.053388679s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:50:30.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1744" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.889 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":131,"skipped":2119,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:50:31.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 30 21:50:36.184: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:50:36.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6679" for this suite. 
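Adoption and release in the spec above reduce to label-selector matching: the ReplicaSet adopts an orphan pod whose labels satisfy its selector and releases a pod once its labels stop matching. A minimal sketch of that match check using the apimachinery labels package (label values are illustrative):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// The controller's adopt/release decision hinges on this predicate.
	sel := labels.SelectorFromSet(labels.Set{"name": "pod-adoption-release"})
	fmt.Println(sel.Matches(labels.Set{"name": "pod-adoption-release"})) // true  -> adopted
	fmt.Println(sel.Matches(labels.Set{"name": "not-matching-anymore"})) // false -> released
}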
• [SLOW TEST:5.342 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":132,"skipped":2124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:50:36.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 30 21:50:41.127: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2147 pod-service-account-e5b53ef1-e443-4bcb-a0b7-5b4aa45217ea -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 30 21:50:41.344: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2147 pod-service-account-e5b53ef1-e443-4bcb-a0b7-5b4aa45217ea -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 30 21:50:41.579: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2147 pod-service-account-e5b53ef1-e443-4bcb-a0b7-5b4aa45217ea -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:50:42.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2147" for this suite. 
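The three "reading a file in the container" steps above cat the token, CA bundle, and namespace that the service-account admission plugin mounts at a well-known path. A sketch that reads the same files from inside a container, stdlib only (it prints sizes rather than secret contents):

package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

func main() {
	// Well-known mount point of the auto-created service-account volume.
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := ioutil.ReadFile(filepath.Join(base, name))
		if err != nil {
			fmt.Printf("%s: not mounted (%v)\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}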
• [SLOW TEST:5.902 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":133,"skipped":2149,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:50:42.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:50:42.647: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7411f97c-5dc8-4c41-a418-2a8700fc12de" in namespace "downward-api-2427" to be "success or failure" May 30 21:50:42.703: INFO: Pod "downwardapi-volume-7411f97c-5dc8-4c41-a418-2a8700fc12de": Phase="Pending", Reason="", readiness=false. Elapsed: 56.720268ms May 30 21:50:44.708: INFO: Pod "downwardapi-volume-7411f97c-5dc8-4c41-a418-2a8700fc12de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061055463s May 30 21:50:46.712: INFO: Pod "downwardapi-volume-7411f97c-5dc8-4c41-a418-2a8700fc12de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065323721s STEP: Saw pod success May 30 21:50:46.712: INFO: Pod "downwardapi-volume-7411f97c-5dc8-4c41-a418-2a8700fc12de" satisfied condition "success or failure" May 30 21:50:46.715: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7411f97c-5dc8-4c41-a418-2a8700fc12de container client-container: STEP: delete the pod May 30 21:50:46.749: INFO: Waiting for pod downwardapi-volume-7411f97c-5dc8-4c41-a418-2a8700fc12de to disappear May 30 21:50:46.772: INFO: Pod downwardapi-volume-7411f97c-5dc8-4c41-a418-2a8700fc12de no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:50:46.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2427" for this suite. 
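What the spec above asserts: when a container sets no cpu limit, a downwardAPI volume file for limits.cpu resolves to the node's allocatable cpu. A minimal sketch of such a volume, assuming v0.17-era k8s.io/api types (the container name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// cpuLimitVolume exposes the container's effective cpu limit as a file;
// with no limit declared, the kubelet falls back to node allocatable.
func cpuLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", cpuLimitVolume())
}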
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:50:46.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 30 21:50:46.849: INFO: Waiting up to 5m0s for pod "downward-api-9cd22152-8d25-4c36-a70c-48f98b2cf573" in namespace "downward-api-1533" to be "success or failure" May 30 21:50:46.855: INFO: Pod "downward-api-9cd22152-8d25-4c36-a70c-48f98b2cf573": Phase="Pending", Reason="", readiness=false. Elapsed: 5.536097ms May 30 21:50:48.898: INFO: Pod "downward-api-9cd22152-8d25-4c36-a70c-48f98b2cf573": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048804254s May 30 21:50:50.911: INFO: Pod "downward-api-9cd22152-8d25-4c36-a70c-48f98b2cf573": Phase="Running", Reason="", readiness=true. Elapsed: 4.061256006s May 30 21:50:52.914: INFO: Pod "downward-api-9cd22152-8d25-4c36-a70c-48f98b2cf573": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064952469s STEP: Saw pod success May 30 21:50:52.914: INFO: Pod "downward-api-9cd22152-8d25-4c36-a70c-48f98b2cf573" satisfied condition "success or failure" May 30 21:50:52.918: INFO: Trying to get logs from node jerma-worker2 pod downward-api-9cd22152-8d25-4c36-a70c-48f98b2cf573 container dapi-container: STEP: delete the pod May 30 21:50:52.953: INFO: Waiting for pod downward-api-9cd22152-8d25-4c36-a70c-48f98b2cf573 to disappear May 30 21:50:53.014: INFO: Pod downward-api-9cd22152-8d25-4c36-a70c-48f98b2cf573 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:50:53.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1533" for this suite. 
• [SLOW TEST:6.244 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2182,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:50:53.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 30 21:50:53.131: INFO: Waiting up to 5m0s for pod "var-expansion-5d1cbf2c-66a4-4b07-9ab1-e1ccdf19cfcd" in namespace "var-expansion-7530" to be "success or failure" May 30 21:50:53.143: INFO: Pod "var-expansion-5d1cbf2c-66a4-4b07-9ab1-e1ccdf19cfcd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.65468ms May 30 21:50:55.147: INFO: Pod "var-expansion-5d1cbf2c-66a4-4b07-9ab1-e1ccdf19cfcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016461694s May 30 21:50:57.151: INFO: Pod "var-expansion-5d1cbf2c-66a4-4b07-9ab1-e1ccdf19cfcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020715287s STEP: Saw pod success May 30 21:50:57.151: INFO: Pod "var-expansion-5d1cbf2c-66a4-4b07-9ab1-e1ccdf19cfcd" satisfied condition "success or failure" May 30 21:50:57.155: INFO: Trying to get logs from node jerma-worker pod var-expansion-5d1cbf2c-66a4-4b07-9ab1-e1ccdf19cfcd container dapi-container: STEP: delete the pod May 30 21:50:57.184: INFO: Waiting for pod var-expansion-5d1cbf2c-66a4-4b07-9ab1-e1ccdf19cfcd to disappear May 30 21:50:57.188: INFO: Pod var-expansion-5d1cbf2c-66a4-4b07-9ab1-e1ccdf19cfcd no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:50:57.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7530" for this suite. 
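The composition being tested here: an env value may reference earlier variables with $(VAR), which the kubelet expands before starting the container. A minimal sketch (names and values are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// $(FIRST) is expanded from the variable defined earlier in the list,
	// so COMPOSED becomes "foo-bar" inside the container.
	env := []corev1.EnvVar{
		{Name: "FIRST", Value: "foo"},
		{Name: "COMPOSED", Value: "$(FIRST)-bar"},
	}
	fmt.Printf("%+v\n", env)
}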
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2193,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:50:57.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 30 21:50:57.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7291 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 30 21:51:01.094: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0530 21:51:01.032560 3080 log.go:172] (0xc0003d6b00) (0xc0005ec1e0) Create stream\nI0530 21:51:01.032627 3080 log.go:172] (0xc0003d6b00) (0xc0005ec1e0) Stream added, broadcasting: 1\nI0530 21:51:01.035888 3080 log.go:172] (0xc0003d6b00) Reply frame received for 1\nI0530 21:51:01.035951 3080 log.go:172] (0xc0003d6b00) (0xc0005ec280) Create stream\nI0530 21:51:01.035991 3080 log.go:172] (0xc0003d6b00) (0xc0005ec280) Stream added, broadcasting: 3\nI0530 21:51:01.036910 3080 log.go:172] (0xc0003d6b00) Reply frame received for 3\nI0530 21:51:01.036960 3080 log.go:172] (0xc0003d6b00) (0xc0005ec320) Create stream\nI0530 21:51:01.036977 3080 log.go:172] (0xc0003d6b00) (0xc0005ec320) Stream added, broadcasting: 5\nI0530 21:51:01.038102 3080 log.go:172] (0xc0003d6b00) Reply frame received for 5\nI0530 21:51:01.038143 3080 log.go:172] (0xc0003d6b00) (0xc0005ec3c0) Create stream\nI0530 21:51:01.038159 3080 log.go:172] (0xc0003d6b00) (0xc0005ec3c0) Stream added, broadcasting: 7\nI0530 21:51:01.039072 3080 log.go:172] (0xc0003d6b00) Reply frame received for 7\nI0530 21:51:01.039326 3080 log.go:172] (0xc0005ec280) (3) Writing data frame\nI0530 21:51:01.039524 3080 log.go:172] (0xc0005ec280) (3) Writing data frame\nI0530 21:51:01.040467 3080 log.go:172] (0xc0003d6b00) Data frame received for 5\nI0530 21:51:01.040497 3080 log.go:172] (0xc0005ec320) (5) Data frame handling\nI0530 21:51:01.040522 3080 log.go:172] (0xc0005ec320) (5) Data frame sent\nI0530 21:51:01.041099 3080 log.go:172] (0xc0003d6b00) Data frame received for 5\nI0530 21:51:01.041337 3080 log.go:172] (0xc0005ec320) (5) Data frame handling\nI0530 21:51:01.041367 3080 log.go:172] (0xc0005ec320) (5) Data frame sent\nI0530 21:51:01.071582 3080 log.go:172] (0xc0003d6b00) Data frame received for 5\nI0530 21:51:01.071617 3080 log.go:172] (0xc0005ec320) (5) Data frame 
handling\nI0530 21:51:01.071944 3080 log.go:172] (0xc0003d6b00) Data frame received for 7\nI0530 21:51:01.071982 3080 log.go:172] (0xc0005ec3c0) (7) Data frame handling\nI0530 21:51:01.072328 3080 log.go:172] (0xc0003d6b00) Data frame received for 1\nI0530 21:51:01.072362 3080 log.go:172] (0xc0005ec1e0) (1) Data frame handling\nI0530 21:51:01.072397 3080 log.go:172] (0xc0005ec1e0) (1) Data frame sent\nI0530 21:51:01.072428 3080 log.go:172] (0xc0003d6b00) (0xc0005ec1e0) Stream removed, broadcasting: 1\nI0530 21:51:01.072651 3080 log.go:172] (0xc0003d6b00) (0xc0005ec280) Stream removed, broadcasting: 3\nI0530 21:51:01.072794 3080 log.go:172] (0xc0003d6b00) Go away received\nI0530 21:51:01.072960 3080 log.go:172] (0xc0003d6b00) (0xc0005ec1e0) Stream removed, broadcasting: 1\nI0530 21:51:01.072995 3080 log.go:172] (0xc0003d6b00) (0xc0005ec280) Stream removed, broadcasting: 3\nI0530 21:51:01.073018 3080 log.go:172] (0xc0003d6b00) (0xc0005ec320) Stream removed, broadcasting: 5\nI0530 21:51:01.073055 3080 log.go:172] (0xc0003d6b00) (0xc0005ec3c0) Stream removed, broadcasting: 7\n" May 30 21:51:01.095: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:51:03.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7291" for this suite. • [SLOW TEST:5.916 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":137,"skipped":2212,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:51:03.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:51:03.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubelet-test-9735" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2255,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:51:03.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 30 21:51:03.311: INFO: >>> kubeConfig: /root/.kube/config May 30 21:51:05.821: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:51:16.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4804" for this suite. • [SLOW TEST:13.153 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":139,"skipped":2279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:51:16.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 30 21:51:16.531: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4305 /api/v1/namespaces/watch-4305/configmaps/e2e-watch-test-label-changed 5d0f5d6c-d560-405d-822d-5e6857a9c971 20437560 0 2020-05-30 21:51:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 30 21:51:16.531: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4305 /api/v1/namespaces/watch-4305/configmaps/e2e-watch-test-label-changed 5d0f5d6c-d560-405d-822d-5e6857a9c971 20437561 0 2020-05-30 21:51:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 30 21:51:16.531: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4305 /api/v1/namespaces/watch-4305/configmaps/e2e-watch-test-label-changed 5d0f5d6c-d560-405d-822d-5e6857a9c971 20437562 0 2020-05-30 21:51:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 30 21:51:26.563: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4305 /api/v1/namespaces/watch-4305/configmaps/e2e-watch-test-label-changed 5d0f5d6c-d560-405d-822d-5e6857a9c971 20437598 0 2020-05-30 21:51:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 30 21:51:26.563: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4305 /api/v1/namespaces/watch-4305/configmaps/e2e-watch-test-label-changed 5d0f5d6c-d560-405d-822d-5e6857a9c971 20437599 0 2020-05-30 21:51:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 30 21:51:26.564: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4305 /api/v1/namespaces/watch-4305/configmaps/e2e-watch-test-label-changed 5d0f5d6c-d560-405d-822d-5e6857a9c971 20437600 0 2020-05-30 21:51:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:51:26.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4305" for this suite. 
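A standalone way to see the selector semantics the Watchers spec above asserts: a label-selected watch turns "object edited so it no longer matches" into a DELETED event, and "label restored" into a fresh ADDED event, even though the underlying ConfigMap existed the whole time. A minimal kubectl sketch, all names illustrative (note --output-watch-events needs a newer kubectl than the v1.17 client used in this run; plain --watch works everywhere):
------------------------------
# Terminal 1: watch only ConfigMaps carrying the label (illustrative names).
kubectl create namespace watch-demo
kubectl -n watch-demo create configmap cm-under-watch
kubectl -n watch-demo label configmap cm-under-watch watch-this-configmap=yes
kubectl -n watch-demo get configmaps -l watch-this-configmap=yes --watch --output-watch-events

# Terminal 2: removing the label drops the object out of the selector; the
# watch reports DELETED although the ConfigMap still exists. Re-adding the
# label produces a new ADDED event, mirroring the spec's expectations.
kubectl -n watch-demo label configmap cm-under-watch watch-this-configmap-
kubectl -n watch-demo label configmap cm-under-watch watch-this-configmap=yes
------------------------------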
• [SLOW TEST:10.168 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":140,"skipped":2317,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:51:26.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 30 21:51:31.715: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:51:31.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9450" for this suite. 
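The Container Runtime spec above checks that with TerminationMessagePolicy FallbackToLogsOnError, a message written to the termination-message file still wins when the pod succeeds; logs are only the fallback on error. A rough standalone reproduction with kubectl (the e2e framework drives this through the Go client; object names here are illustrative):
------------------------------
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log    # the default path
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After the pod succeeds, the file's contents surface in the container status:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
------------------------------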
• [SLOW TEST:5.169 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2359,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:51:31.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-cbae71f4-565d-4828-8c96-f12066f4063f in namespace container-probe-5687 May 30 21:51:35.835: INFO: Started pod busybox-cbae71f4-565d-4828-8c96-f12066f4063f in namespace container-probe-5687 STEP: checking the pod's current state and verifying that restartCount is present May 30 21:51:35.839: INFO: Initial restart count of pod busybox-cbae71f4-565d-4828-8c96-f12066f4063f is 0 May 30 21:52:31.998: INFO: Restart count of pod container-probe-5687/busybox-cbae71f4-565d-4828-8c96-f12066f4063f is now 1 (56.158663016s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:52:32.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5687" for this suite. 
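The restart counted in the probe spec above comes from the kubelet killing the container once its exec probe starts failing. A self-contained pod that behaves the same way (name illustrative; timings roughly match the one-restart-in-a-minute pattern in the log):
------------------------------
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo   # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    # Healthy for 30s, then /tmp/health disappears and the probe fails.
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF
kubectl get pod liveness-exec-demo --watch   # RESTARTS climbs from 0 to 1
------------------------------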
• [SLOW TEST:60.330 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2383,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:52:32.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 30 21:52:32.172: INFO: Waiting up to 5m0s for pod "pod-8939065d-b289-4be7-b2e8-5c177a20e6d2" in namespace "emptydir-3746" to be "success or failure" May 30 21:52:32.180: INFO: Pod "pod-8939065d-b289-4be7-b2e8-5c177a20e6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.193303ms May 30 21:52:34.184: INFO: Pod "pod-8939065d-b289-4be7-b2e8-5c177a20e6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01233862s May 30 21:52:36.190: INFO: Pod "pod-8939065d-b289-4be7-b2e8-5c177a20e6d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017589151s STEP: Saw pod success May 30 21:52:36.190: INFO: Pod "pod-8939065d-b289-4be7-b2e8-5c177a20e6d2" satisfied condition "success or failure" May 30 21:52:36.192: INFO: Trying to get logs from node jerma-worker2 pod pod-8939065d-b289-4be7-b2e8-5c177a20e6d2 container test-container: STEP: delete the pod May 30 21:52:36.219: INFO: Waiting for pod pod-8939065d-b289-4be7-b2e8-5c177a20e6d2 to disappear May 30 21:52:36.223: INFO: Pod pod-8939065d-b289-4be7-b2e8-5c177a20e6d2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:52:36.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3746" for this suite. 
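What "(root,0777,default)" abbreviates in the EmptyDir spec above: run as root, expect mode 0777 on the mount point, use the default (node-disk) emptyDir medium. A minimal sketch that prints the mode the spec asserts (names illustrative):
------------------------------
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "stat -c 'mode %a' /test-volume"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir: {}   # no medium set, i.e. the "default" node-local disk
EOF
kubectl logs emptydir-mode-demo   # expect: mode 777
------------------------------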
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2397,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:52:36.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-ce74fbbf-ddd0-4ea3-b799-ab5ce3528271 STEP: Creating a pod to test consume configMaps May 30 21:52:36.573: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ffd157fc-8621-4e08-8351-6ffce2211cd2" in namespace "projected-3004" to be "success or failure" May 30 21:52:36.655: INFO: Pod "pod-projected-configmaps-ffd157fc-8621-4e08-8351-6ffce2211cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 82.026672ms May 30 21:52:38.659: INFO: Pod "pod-projected-configmaps-ffd157fc-8621-4e08-8351-6ffce2211cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086059309s May 30 21:52:40.664: INFO: Pod "pod-projected-configmaps-ffd157fc-8621-4e08-8351-6ffce2211cd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091069581s STEP: Saw pod success May 30 21:52:40.664: INFO: Pod "pod-projected-configmaps-ffd157fc-8621-4e08-8351-6ffce2211cd2" satisfied condition "success or failure" May 30 21:52:40.667: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-ffd157fc-8621-4e08-8351-6ffce2211cd2 container projected-configmap-volume-test: STEP: delete the pod May 30 21:52:40.712: INFO: Waiting for pod pod-projected-configmaps-ffd157fc-8621-4e08-8351-6ffce2211cd2 to disappear May 30 21:52:40.810: INFO: Pod pod-projected-configmaps-ffd157fc-8621-4e08-8351-6ffce2211cd2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:52:40.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3004" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:52:40.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-fa421ffe-a514-4524-916d-1579e61d152e STEP: Creating a pod to test consume configMaps May 30 21:52:41.006: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-15ff6a8a-bbde-424d-9086-5133bc5a0333" in namespace "projected-4718" to be "success or failure" May 30 21:52:41.014: INFO: Pod "pod-projected-configmaps-15ff6a8a-bbde-424d-9086-5133bc5a0333": Phase="Pending", Reason="", readiness=false. Elapsed: 8.708699ms May 30 21:52:43.018: INFO: Pod "pod-projected-configmaps-15ff6a8a-bbde-424d-9086-5133bc5a0333": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011945379s May 30 21:52:45.021: INFO: Pod "pod-projected-configmaps-15ff6a8a-bbde-424d-9086-5133bc5a0333": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015074725s STEP: Saw pod success May 30 21:52:45.021: INFO: Pod "pod-projected-configmaps-15ff6a8a-bbde-424d-9086-5133bc5a0333" satisfied condition "success or failure" May 30 21:52:45.023: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-15ff6a8a-bbde-424d-9086-5133bc5a0333 container projected-configmap-volume-test: STEP: delete the pod May 30 21:52:45.075: INFO: Waiting for pod pod-projected-configmaps-15ff6a8a-bbde-424d-9086-5133bc5a0333 to disappear May 30 21:52:45.116: INFO: Pod pod-projected-configmaps-15ff6a8a-bbde-424d-9086-5133bc5a0333 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:52:45.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4718" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2506,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:52:45.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:52:45.201: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa0e1bc1-afc6-41f8-8076-40eb614e8d56" in namespace "downward-api-4040" to be "success or failure" May 30 21:52:45.210: INFO: Pod "downwardapi-volume-fa0e1bc1-afc6-41f8-8076-40eb614e8d56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.899849ms May 30 21:52:47.236: INFO: Pod "downwardapi-volume-fa0e1bc1-afc6-41f8-8076-40eb614e8d56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034414576s May 30 21:52:49.239: INFO: Pod "downwardapi-volume-fa0e1bc1-afc6-41f8-8076-40eb614e8d56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037648429s STEP: Saw pod success May 30 21:52:49.239: INFO: Pod "downwardapi-volume-fa0e1bc1-afc6-41f8-8076-40eb614e8d56" satisfied condition "success or failure" May 30 21:52:49.241: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-fa0e1bc1-afc6-41f8-8076-40eb614e8d56 container client-container: STEP: delete the pod May 30 21:52:49.534: INFO: Waiting for pod downwardapi-volume-fa0e1bc1-afc6-41f8-8076-40eb614e8d56 to disappear May 30 21:52:49.619: INFO: Pod downwardapi-volume-fa0e1bc1-afc6-41f8-8076-40eb614e8d56 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:52:49.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4040" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2519,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:52:49.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-4227f9f3-6e21-4646-a330-d33680fa923d STEP: Creating a pod to test consume configMaps May 30 21:52:49.726: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-45a6694b-cdc5-4170-a75f-8960341a0e51" in namespace "projected-2632" to be "success or failure" May 30 21:52:49.750: INFO: Pod "pod-projected-configmaps-45a6694b-cdc5-4170-a75f-8960341a0e51": Phase="Pending", Reason="", readiness=false. Elapsed: 23.524211ms May 30 21:52:51.907: INFO: Pod "pod-projected-configmaps-45a6694b-cdc5-4170-a75f-8960341a0e51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180056965s May 30 21:52:53.911: INFO: Pod "pod-projected-configmaps-45a6694b-cdc5-4170-a75f-8960341a0e51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.184584542s STEP: Saw pod success May 30 21:52:53.911: INFO: Pod "pod-projected-configmaps-45a6694b-cdc5-4170-a75f-8960341a0e51" satisfied condition "success or failure" May 30 21:52:53.914: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-45a6694b-cdc5-4170-a75f-8960341a0e51 container projected-configmap-volume-test: STEP: delete the pod May 30 21:52:53.952: INFO: Waiting for pod pod-projected-configmaps-45a6694b-cdc5-4170-a75f-8960341a0e51 to disappear May 30 21:52:53.960: INFO: Pod pod-projected-configmaps-45a6694b-cdc5-4170-a75f-8960341a0e51 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:52:53.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2632" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:52:53.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 30 21:52:54.064: INFO: Waiting up to 5m0s for pod "client-containers-05d83a13-bb9d-48ed-acd4-f9bc94339cd8" in namespace "containers-3383" to be "success or failure" May 30 21:52:54.068: INFO: Pod "client-containers-05d83a13-bb9d-48ed-acd4-f9bc94339cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234677ms May 30 21:52:56.116: INFO: Pod "client-containers-05d83a13-bb9d-48ed-acd4-f9bc94339cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051597989s May 30 21:52:58.120: INFO: Pod "client-containers-05d83a13-bb9d-48ed-acd4-f9bc94339cd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056152629s STEP: Saw pod success May 30 21:52:58.120: INFO: Pod "client-containers-05d83a13-bb9d-48ed-acd4-f9bc94339cd8" satisfied condition "success or failure" May 30 21:52:58.124: INFO: Trying to get logs from node jerma-worker pod client-containers-05d83a13-bb9d-48ed-acd4-f9bc94339cd8 container test-container: STEP: delete the pod May 30 21:52:58.142: INFO: Waiting for pod client-containers-05d83a13-bb9d-48ed-acd4-f9bc94339cd8 to disappear May 30 21:52:58.146: INFO: Pod client-containers-05d83a13-bb9d-48ed-acd4-f9bc94339cd8 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:52:58.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3383" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2605,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:52:58.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 30 21:52:58.223: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1303" to be "success or failure" May 30 21:52:58.259: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 36.039447ms May 30 21:53:00.264: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040302602s May 30 21:53:02.302: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078382533s May 30 21:53:04.306: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082417017s STEP: Saw pod success May 30 21:53:04.306: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 30 21:53:04.309: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 30 21:53:04.355: INFO: Waiting for pod pod-host-path-test to disappear May 30 21:53:04.372: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:53:04.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1303" for this suite. 
• [SLOW TEST:6.226 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2630,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:53:04.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-5a2773f1-a05b-465e-aa3f-a5c826270f5e STEP: Creating secret with name secret-projected-all-test-volume-8f86d728-b04b-42f6-a511-1f4909846805 STEP: Creating a pod to test Check all projections for projected volume plugin May 30 21:53:04.602: INFO: Waiting up to 5m0s for pod "projected-volume-7b7c4718-65cc-4212-95a7-f39138bc9ca1" in namespace "projected-6379" to be "success or failure" May 30 21:53:04.635: INFO: Pod "projected-volume-7b7c4718-65cc-4212-95a7-f39138bc9ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.627717ms May 30 21:53:06.673: INFO: Pod "projected-volume-7b7c4718-65cc-4212-95a7-f39138bc9ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070527452s May 30 21:53:08.677: INFO: Pod "projected-volume-7b7c4718-65cc-4212-95a7-f39138bc9ca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074668524s STEP: Saw pod success May 30 21:53:08.677: INFO: Pod "projected-volume-7b7c4718-65cc-4212-95a7-f39138bc9ca1" satisfied condition "success or failure" May 30 21:53:08.680: INFO: Trying to get logs from node jerma-worker pod projected-volume-7b7c4718-65cc-4212-95a7-f39138bc9ca1 container projected-all-volume-test: STEP: delete the pod May 30 21:53:08.753: INFO: Waiting for pod projected-volume-7b7c4718-65cc-4212-95a7-f39138bc9ca1 to disappear May 30 21:53:08.756: INFO: Pod projected-volume-7b7c4718-65cc-4212-95a7-f39138bc9ca1 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:53:08.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6379" for this suite. 
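The point of the "combined" projection spec above is that one projected volume can merge configMap, secret, and downward API sources under a single mount. Sketch (all object names illustrative):
------------------------------
kubectl create configmap all-proj-cm --from-literal=configmap-data=from-configmap
kubectl create secret generic all-proj-secret --from-literal=secret-data=from-secret
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /all/podname /all/cm /all/secret"]
    volumeMounts:
    - name: everything
      mountPath: /all
  volumes:
  - name: everything
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: all-proj-cm
          items:
          - key: configmap-data
            path: cm
      - secret:
          name: all-proj-secret
          items:
          - key: secret-data
            path: secret
EOF
kubectl logs projected-all-demo
------------------------------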
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2640,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:53:08.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:53:09.619: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:53:11.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472389, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472389, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472389, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472389, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:53:14.667: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:53:15.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9962" for this suite. STEP: Destroying namespace "webhook-9962-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.627 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":151,"skipped":2657,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:53:15.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:54:15.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5966" for this suite. 
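The probe spec above runs for a full minute precisely to prove a negative: a failing readiness probe keeps the pod out of Ready (and out of Service endpoints) but, unlike a liveness probe, never restarts the container. Sketch (name illustrative):
------------------------------
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo   # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails
      periodSeconds: 5
EOF
# READY stays 0/1 and RESTARTS stays 0 for the pod's whole lifetime.
kubectl get pod readiness-fail-demo --watch
------------------------------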
• [SLOW TEST:60.086 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2669,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:54:15.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-17745298-58f7-4101-8ed5-8f2bd4243077 STEP: Creating a pod to test consume secrets May 30 21:54:15.570: INFO: Waiting up to 5m0s for pod "pod-secrets-113f514a-a84f-43bd-9bfb-b47850072d5e" in namespace "secrets-7790" to be "success or failure" May 30 21:54:15.574: INFO: Pod "pod-secrets-113f514a-a84f-43bd-9bfb-b47850072d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.769253ms May 30 21:54:17.614: INFO: Pod "pod-secrets-113f514a-a84f-43bd-9bfb-b47850072d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044467581s May 30 21:54:19.668: INFO: Pod "pod-secrets-113f514a-a84f-43bd-9bfb-b47850072d5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098227253s STEP: Saw pod success May 30 21:54:19.668: INFO: Pod "pod-secrets-113f514a-a84f-43bd-9bfb-b47850072d5e" satisfied condition "success or failure" May 30 21:54:19.671: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-113f514a-a84f-43bd-9bfb-b47850072d5e container secret-volume-test: STEP: delete the pod May 30 21:54:19.699: INFO: Waiting for pod pod-secrets-113f514a-a84f-43bd-9bfb-b47850072d5e to disappear May 30 21:54:19.834: INFO: Pod pod-secrets-113f514a-a84f-43bd-9bfb-b47850072d5e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:54:19.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7790" for this suite. 
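Equivalent to the secret-volume consumption in the Secrets spec above, stripped to its essentials (object names illustrative):
------------------------------
kubectl create secret generic volume-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: volume-secret-demo
EOF
kubectl logs secret-volume-demo   # prints: value-1
------------------------------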
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2700,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:54:19.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-e7b2923b-3119-4c16-9b85-75cb73dfb891 STEP: Creating secret with name s-test-opt-upd-8e571986-7a8a-4359-9126-fbabf31935cb STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e7b2923b-3119-4c16-9b85-75cb73dfb891 STEP: Updating secret s-test-opt-upd-8e571986-7a8a-4359-9126-fbabf31935cb STEP: Creating secret with name s-test-opt-create-6f5b2314-8544-42b8-bb57-53fabc99d7d6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:54:28.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3124" for this suite. • [SLOW TEST:8.230 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2707,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:54:28.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 30 21:54:32.697: INFO: Successfully updated pod "labelsupdatee3add3b5-95d0-4b4a-9ab4-4f5ee3066bbe" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:54:36.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"downward-api-2017" for this suite. • [SLOW TEST:8.681 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:54:36.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:54:36.801: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:54:37.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4895" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":156,"skipped":2762,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:54:37.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:54:37.513: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 30 21:54:42.568: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 30 21:54:42.569: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 30 21:54:42.652: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1786 /apis/apps/v1/namespaces/deployment-1786/deployments/test-cleanup-deployment ffbac6fb-49c4-4294-a8ab-d79773d0a7ab 20438680 1 2020-05-30 21:54:42 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001062128 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 30 21:54:42.947: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-1786 /apis/apps/v1/namespaces/deployment-1786/replicasets/test-cleanup-deployment-55ffc6b7b6 40daa3a4-5eb9-4b3f-9fd1-57247acd93f3 
20438687 1 2020-05-30 21:54:42 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment ffbac6fb-49c4-4294-a8ab-d79773d0a7ab 0xc001062547 0xc001062548}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0010625b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 21:54:42.947: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 30 21:54:42.947: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1786 /apis/apps/v1/namespaces/deployment-1786/replicasets/test-cleanup-controller 0e559507-8b95-4e67-890c-83675f749678 20438681 1 2020-05-30 21:54:37 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment ffbac6fb-49c4-4294-a8ab-d79773d0a7ab 0xc00106244f 0xc001062460}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0010624d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 30 21:54:43.106: INFO: Pod "test-cleanup-controller-s9ztq" is available: &Pod{ObjectMeta:{test-cleanup-controller-s9ztq test-cleanup-controller- deployment-1786 /api/v1/namespaces/deployment-1786/pods/test-cleanup-controller-s9ztq f160b7d3-26b8-4586-aa4e-6d366320b14f 20438660 0 2020-05-30 21:54:37 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 0e559507-8b95-4e67-890c-83675f749678 0xc003a3b777 0xc003a3b778}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7pmhs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7pmhs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7pmhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:54:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:54:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:54:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:54:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.19,StartTime:2020-05-30 21:54:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 21:54:39 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://01ec67cad000c04c6653b8a1f833dcb624d95d55bcbc9165f1c4d47f0faffb57,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 21:54:43.106: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-ttl5q" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-ttl5q test-cleanup-deployment-55ffc6b7b6- deployment-1786 /api/v1/namespaces/deployment-1786/pods/test-cleanup-deployment-55ffc6b7b6-ttl5q 255ffee8-720d-41f4-8086-8073495db6bf 20438692 0 2020-05-30 21:54:42 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 40daa3a4-5eb9-4b3f-9fd1-57247acd93f3 0xc003a3b907 0xc003a3b908}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7pmhs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7pmhs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7pmhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNames
pace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:54:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:54:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [agnhost],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:54:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [agnhost],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:54:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 21:54:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:54:43.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1786" for this suite. 
• [SLOW TEST:5.705 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":157,"skipped":2776,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:54:43.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 30 21:54:43.180: INFO: Waiting up to 5m0s for pod "var-expansion-fdb7769a-4bbb-4300-aef3-fffb58793be7" in namespace "var-expansion-348" to be "success or failure" May 30 21:54:43.263: INFO: Pod "var-expansion-fdb7769a-4bbb-4300-aef3-fffb58793be7": Phase="Pending", Reason="", readiness=false. Elapsed: 82.714661ms May 30 21:54:45.267: INFO: Pod "var-expansion-fdb7769a-4bbb-4300-aef3-fffb58793be7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086640349s May 30 21:54:47.269: INFO: Pod "var-expansion-fdb7769a-4bbb-4300-aef3-fffb58793be7": Phase="Running", Reason="", readiness=true. Elapsed: 4.088990647s May 30 21:54:49.273: INFO: Pod "var-expansion-fdb7769a-4bbb-4300-aef3-fffb58793be7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093284471s STEP: Saw pod success May 30 21:54:49.273: INFO: Pod "var-expansion-fdb7769a-4bbb-4300-aef3-fffb58793be7" satisfied condition "success or failure" May 30 21:54:49.276: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-fdb7769a-4bbb-4300-aef3-fffb58793be7 container dapi-container: STEP: delete the pod May 30 21:54:49.316: INFO: Waiting for pod var-expansion-fdb7769a-4bbb-4300-aef3-fffb58793be7 to disappear May 30 21:54:49.323: INFO: Pod var-expansion-fdb7769a-4bbb-4300-aef3-fffb58793be7 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:54:49.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-348" for this suite. 
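
For reference, the Variable Expansion spec above exercises $(VAR) substitution in a container's args. A minimal sketch of the same behavior outside the suite follows; the pod name and the MESSAGE variable are illustrative, not the suite's generated values, while the container name dapi-container is the one visible in the log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test-value"
    command: ["/bin/sh", "-c"]
    # $(MESSAGE) is expanded by Kubernetes before the shell ever runs
    args: ["echo $(MESSAGE)"]
EOF
# after the pod completes, the expanded value appears in its logs:
kubectl logs var-expansion-demo    # expected output: test-value
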
• [SLOW TEST:6.201 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2795,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:54:49.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:54:49.397: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:54:50.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3832" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":159,"skipped":2799,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:54:50.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 30 21:54:50.640: INFO: >>> kubeConfig: /root/.kube/config May 30 21:54:52.612: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:55:04.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5565" for this suite. 
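
The two CustomResourceDefinition specs above both rely on apiextensions.k8s.io/v1 structural schemas: a default declared in openAPIV3Schema is applied to create requests and to objects read back from storage, and the schema is published into the cluster's aggregated OpenAPI document. A minimal sketch, with a hypothetical group and kind:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com            # hypothetical CRD
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1             # defaulting for requests and from storage
EOF
# publication can lag a few seconds; the CRD then shows up in the OpenAPI
# document under the reversed group name:
kubectl get --raw /openapi/v2 | grep -c 'com.example.v1.Widget'
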
• [SLOW TEST:13.525 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":160,"skipped":2802,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:55:04.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-542.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-542.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-542.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 21:55:10.195: INFO: DNS probes using dns-test-45658ed1-b380-41f9-9c44-d354599bc328 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-542.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-542.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-542.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 21:55:18.320: INFO: File wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local from pod dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba contains 'foo.example.com. ' instead of 'bar.example.com.' May 30 21:55:18.323: INFO: File jessie_udp@dns-test-service-3.dns-542.svc.cluster.local from pod dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba contains 'foo.example.com. ' instead of 'bar.example.com.' May 30 21:55:18.323: INFO: Lookups using dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba failed for: [wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local jessie_udp@dns-test-service-3.dns-542.svc.cluster.local] May 30 21:55:23.329: INFO: File wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local from pod dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba contains 'foo.example.com. 
' instead of 'bar.example.com.' May 30 21:55:23.333: INFO: File jessie_udp@dns-test-service-3.dns-542.svc.cluster.local from pod dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba contains 'foo.example.com. ' instead of 'bar.example.com.' May 30 21:55:23.333: INFO: Lookups using dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba failed for: [wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local jessie_udp@dns-test-service-3.dns-542.svc.cluster.local] May 30 21:55:28.328: INFO: File wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local from pod dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba contains 'foo.example.com. ' instead of 'bar.example.com.' May 30 21:55:28.333: INFO: File jessie_udp@dns-test-service-3.dns-542.svc.cluster.local from pod dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba contains 'foo.example.com. ' instead of 'bar.example.com.' May 30 21:55:28.333: INFO: Lookups using dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba failed for: [wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local jessie_udp@dns-test-service-3.dns-542.svc.cluster.local] May 30 21:55:33.328: INFO: File wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local from pod dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba contains 'foo.example.com. ' instead of 'bar.example.com.' May 30 21:55:33.332: INFO: File jessie_udp@dns-test-service-3.dns-542.svc.cluster.local from pod dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba contains 'foo.example.com. ' instead of 'bar.example.com.' May 30 21:55:33.332: INFO: Lookups using dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba failed for: [wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local jessie_udp@dns-test-service-3.dns-542.svc.cluster.local] May 30 21:55:38.328: INFO: File wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local from pod dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba contains 'foo.example.com. ' instead of 'bar.example.com.' May 30 21:55:38.332: INFO: File jessie_udp@dns-test-service-3.dns-542.svc.cluster.local from pod dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 30 21:55:38.332: INFO: Lookups using dns-542/dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba failed for: [wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local jessie_udp@dns-test-service-3.dns-542.svc.cluster.local] May 30 21:55:43.334: INFO: DNS probes using dns-test-6ef9240e-1192-41f0-98fa-49f05b198fba succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-542.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-542.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-542.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-542.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 21:55:50.103: INFO: DNS probes using dns-test-31713b1e-4a79-4f4d-9efb-1d292aea5ad8 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:55:50.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-542" for this suite. • [SLOW TEST:46.433 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":161,"skipped":2815,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:55:50.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:55:50.821: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 30 21:55:56.126: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 30 21:55:56.126: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 30 21:55:58.130: INFO: Creating deployment "test-rollover-deployment" May 30 21:55:58.166: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 30 21:56:00.185: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 30 21:56:00.191: INFO: Ensure that both replica sets have 1 created replica May 30 21:56:00.196: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 30 21:56:00.203: INFO: Updating deployment 
test-rollover-deployment May 30 21:56:00.203: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 30 21:56:02.228: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 30 21:56:02.235: INFO: Make sure deployment "test-rollover-deployment" is complete May 30 21:56:02.242: INFO: all replica sets need to contain the pod-template-hash label May 30 21:56:02.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472560, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:56:04.251: INFO: all replica sets need to contain the pod-template-hash label May 30 21:56:04.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472563, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:56:06.251: INFO: all replica sets need to contain the pod-template-hash label May 30 21:56:06.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472563, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:56:08.250: INFO: all replica sets need to contain the pod-template-hash 
label May 30 21:56:08.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472563, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:56:10.250: INFO: all replica sets need to contain the pod-template-hash label May 30 21:56:10.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472563, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:56:12.254: INFO: all replica sets need to contain the pod-template-hash label May 30 21:56:12.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472563, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 21:56:14.249: INFO: May 30 21:56:14.249: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 30 21:56:14.257: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4855 /apis/apps/v1/namespaces/deployment-4855/deployments/test-rollover-deployment e05a3b5c-87db-425c-8290-e6755e48405b 20439253 2 
2020-05-30 21:55:58 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cdc418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-30 21:55:58 +0000 UTC,LastTransitionTime:2020-05-30 21:55:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-30 21:56:13 +0000 UTC,LastTransitionTime:2020-05-30 21:55:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 30 21:56:14.260: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4855 /apis/apps/v1/namespaces/deployment-4855/replicasets/test-rollover-deployment-574d6dfbff 55e7aa68-5dda-439a-bb8b-4d235129cd06 20439242 2 2020-05-30 21:56:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e05a3b5c-87db-425c-8290-e6755e48405b 0xc0007f6a57 0xc0007f6a58}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0007f6dd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 30 21:56:14.260: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 30 21:56:14.260: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4855 /apis/apps/v1/namespaces/deployment-4855/replicasets/test-rollover-controller c959fd1d-abf4-4e2a-af7f-03fa29e94493 20439251 2 2020-05-30 21:55:50 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e05a3b5c-87db-425c-8290-e6755e48405b 0xc0007f667f 0xc0007f6710}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0007f6938 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 21:56:14.261: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4855 /apis/apps/v1/namespaces/deployment-4855/replicasets/test-rollover-deployment-f6c94f66c 6b5d02f4-e7f6-4bbc-9780-5f6542a26f7f 20439196 2 2020-05-30 21:55:58 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e05a3b5c-87db-425c-8290-e6755e48405b 0xc0007f6f10 0xc0007f6f11}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0007f71c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 21:56:14.263: INFO: Pod "test-rollover-deployment-574d6dfbff-n5cfg" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-n5cfg test-rollover-deployment-574d6dfbff- deployment-4855 
/api/v1/namespaces/deployment-4855/pods/test-rollover-deployment-574d6dfbff-n5cfg 7e063f97-04c2-4dc4-80d4-e810f5619845 20439210 0 2020-05-30 21:56:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 55e7aa68-5dda-439a-bb8b-4d235129cd06 0xc0058dfb87 0xc0058dfb88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghxn5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghxn5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghxn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:56:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:56:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:56:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 21:56:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.25,StartTime:2020-05-30 21:56:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 21:56:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://8660a61fd8a724aeaf001919ee2743c0173a2990a9910c6f880a3db865e0ca89,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:56:14.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4855" for this suite. • [SLOW TEST:23.731 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":162,"skipped":2824,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:56:14.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 30 21:56:14.516: INFO: Waiting up to 5m0s for pod "pod-cc9f4823-6af6-4aa4-98fe-d85df9c05786" in namespace "emptydir-6134" to be "success or failure" May 30 21:56:14.519: INFO: Pod "pod-cc9f4823-6af6-4aa4-98fe-d85df9c05786": Phase="Pending", Reason="", readiness=false. Elapsed: 3.364035ms May 30 21:56:16.522: INFO: Pod "pod-cc9f4823-6af6-4aa4-98fe-d85df9c05786": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006619658s May 30 21:56:18.527: INFO: Pod "pod-cc9f4823-6af6-4aa4-98fe-d85df9c05786": Phase="Running", Reason="", readiness=true. Elapsed: 4.010983428s May 30 21:56:20.531: INFO: Pod "pod-cc9f4823-6af6-4aa4-98fe-d85df9c05786": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015305344s STEP: Saw pod success May 30 21:56:20.531: INFO: Pod "pod-cc9f4823-6af6-4aa4-98fe-d85df9c05786" satisfied condition "success or failure" May 30 21:56:20.533: INFO: Trying to get logs from node jerma-worker pod pod-cc9f4823-6af6-4aa4-98fe-d85df9c05786 container test-container: STEP: delete the pod May 30 21:56:20.792: INFO: Waiting for pod pod-cc9f4823-6af6-4aa4-98fe-d85df9c05786 to disappear May 30 21:56:20.799: INFO: Pod pod-cc9f4823-6af6-4aa4-98fe-d85df9c05786 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:56:20.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6134" for this suite. • [SLOW TEST:6.534 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2843,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:56:20.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 30 21:56:21.056: INFO: Created pod &Pod{ObjectMeta:{dns-8058 dns-8058 /api/v1/namespaces/dns-8058/pods/dns-8058 4f4be923-005f-4003-adaa-9bcb68fa1cbf 20439328 0 2020-05-30 21:56:21 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl9h9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl9h9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl9h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
May 30 21:56:25.122: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8058 PodName:dns-8058 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:56:25.122: INFO: >>> kubeConfig: /root/.kube/config I0530 21:56:25.162449 6 log.go:172] (0xc0019722c0) (0xc001599400) Create stream I0530 21:56:25.162485 6 log.go:172] (0xc0019722c0) (0xc001599400) Stream added, broadcasting: 1 I0530 21:56:25.164396 6 log.go:172] (0xc0019722c0) Reply frame received for 1 I0530 21:56:25.164433 6 log.go:172] (0xc0019722c0) (0xc002813b80) Create stream I0530 21:56:25.164447 6 log.go:172] (0xc0019722c0) (0xc002813b80) Stream added, broadcasting: 3 I0530 21:56:25.165707 6 log.go:172] (0xc0019722c0) Reply frame received for 3 I0530 21:56:25.165741 6 log.go:172] (0xc0019722c0) (0xc0018d4460) Create stream I0530 21:56:25.165751 6 log.go:172] (0xc0019722c0) (0xc0018d4460) Stream added, broadcasting: 5 I0530 21:56:25.166767 6 log.go:172] (0xc0019722c0) Reply frame received for 5 I0530 21:56:25.252712 6 log.go:172] (0xc0019722c0) Data frame received for 3 I0530 21:56:25.252753 6 log.go:172] (0xc002813b80) (3) Data frame handling I0530 21:56:25.252785 6 log.go:172] (0xc002813b80) (3) Data frame sent I0530 21:56:25.254353 6 log.go:172] (0xc0019722c0) Data frame received for 3 I0530 21:56:25.254377 6 log.go:172] (0xc002813b80) (3) Data frame handling I0530 21:56:25.254406 6 log.go:172] (0xc0019722c0) Data frame received for 5 I0530 21:56:25.254457 6 log.go:172] (0xc0018d4460) (5) Data frame handling I0530 21:56:25.255955 6 log.go:172] (0xc0019722c0) Data frame received for 1 I0530 21:56:25.255970 6 log.go:172] (0xc001599400) (1) Data frame handling I0530 21:56:25.255979 6 log.go:172] (0xc001599400) (1) Data frame sent I0530 21:56:25.255994 6 log.go:172] (0xc0019722c0) (0xc001599400) Stream removed, broadcasting: 1 I0530 21:56:25.256034 6 log.go:172] (0xc0019722c0) Go away received I0530 21:56:25.256072 6 log.go:172] (0xc0019722c0) (0xc001599400) Stream removed, broadcasting: 1 I0530 21:56:25.256085 6 log.go:172] (0xc0019722c0) (0xc002813b80) Stream removed, broadcasting: 3 I0530 21:56:25.256096 6 log.go:172] (0xc0019722c0) (0xc0018d4460) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 30 21:56:25.256: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8058 PodName:dns-8058 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 21:56:25.256: INFO: >>> kubeConfig: /root/.kube/config I0530 21:56:25.282785 6 log.go:172] (0xc00293c4d0) (0xc0018d4a00) Create stream I0530 21:56:25.282808 6 log.go:172] (0xc00293c4d0) (0xc0018d4a00) Stream added, broadcasting: 1 I0530 21:56:25.284840 6 log.go:172] (0xc00293c4d0) Reply frame received for 1 I0530 21:56:25.284877 6 log.go:172] (0xc00293c4d0) (0xc001599540) Create stream I0530 21:56:25.284892 6 log.go:172] (0xc00293c4d0) (0xc001599540) Stream added, broadcasting: 3 I0530 21:56:25.286375 6 log.go:172] (0xc00293c4d0) Reply frame received for 3 I0530 21:56:25.286432 6 log.go:172] (0xc00293c4d0) (0xc001599720) Create stream I0530 21:56:25.286458 6 log.go:172] (0xc00293c4d0) (0xc001599720) Stream added, broadcasting: 5 I0530 21:56:25.287586 6 log.go:172] (0xc00293c4d0) Reply frame received for 5 I0530 21:56:25.392435 6 log.go:172] (0xc00293c4d0) Data frame received for 3 I0530 21:56:25.392462 6 log.go:172] (0xc001599540) (3) Data frame handling I0530 21:56:25.392480 6 log.go:172] (0xc001599540) (3) Data frame sent I0530 21:56:25.394392 6 log.go:172] (0xc00293c4d0) Data frame received for 5 I0530 21:56:25.394419 6 log.go:172] (0xc001599720) (5) Data frame handling I0530 21:56:25.395392 6 log.go:172] (0xc00293c4d0) Data frame received for 3 I0530 21:56:25.395409 6 log.go:172] (0xc001599540) (3) Data frame handling I0530 21:56:25.396767 6 log.go:172] (0xc00293c4d0) Data frame received for 1 I0530 21:56:25.396785 6 log.go:172] (0xc0018d4a00) (1) Data frame handling I0530 21:56:25.396797 6 log.go:172] (0xc0018d4a00) (1) Data frame sent I0530 21:56:25.396807 6 log.go:172] (0xc00293c4d0) (0xc0018d4a00) Stream removed, broadcasting: 1 I0530 21:56:25.396828 6 log.go:172] (0xc00293c4d0) Go away received I0530 21:56:25.396937 6 log.go:172] (0xc00293c4d0) (0xc0018d4a00) Stream removed, broadcasting: 1 I0530 21:56:25.396992 6 log.go:172] (0xc00293c4d0) (0xc001599540) Stream removed, broadcasting: 3 I0530 21:56:25.397016 6 log.go:172] (0xc00293c4d0) (0xc001599720) Stream removed, broadcasting: 5 May 30 21:56:25.397: INFO: Deleting pod dns-8058... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:56:25.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8058" for this suite. 
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":164,"skipped":2867,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:56:25.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:56:25.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 30 21:56:26.013: INFO: stderr: "" May 30 21:56:26.013: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:56:26.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9695" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":165,"skipped":2874,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:56:26.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-27accc88-6dd6-4247-96ea-5f950a4d1ec3 STEP: Creating a pod to test consume secrets May 30 21:56:26.312: INFO: Waiting up to 5m0s for pod "pod-secrets-96bb49c5-3638-4e52-9632-84d084adee80" in namespace "secrets-3623" to be "success or failure" May 30 21:56:26.394: INFO: Pod "pod-secrets-96bb49c5-3638-4e52-9632-84d084adee80": Phase="Pending", Reason="", readiness=false. 
Elapsed: 82.570025ms May 30 21:56:28.398: INFO: Pod "pod-secrets-96bb49c5-3638-4e52-9632-84d084adee80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086916168s May 30 21:56:30.403: INFO: Pod "pod-secrets-96bb49c5-3638-4e52-9632-84d084adee80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091276348s STEP: Saw pod success May 30 21:56:30.403: INFO: Pod "pod-secrets-96bb49c5-3638-4e52-9632-84d084adee80" satisfied condition "success or failure" May 30 21:56:30.406: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-96bb49c5-3638-4e52-9632-84d084adee80 container secret-volume-test: STEP: delete the pod May 30 21:56:30.442: INFO: Waiting for pod pod-secrets-96bb49c5-3638-4e52-9632-84d084adee80 to disappear May 30 21:56:30.446: INFO: Pod pod-secrets-96bb49c5-3638-4e52-9632-84d084adee80 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:56:30.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3623" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2891,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:56:30.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 30 21:56:30.515: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:56:30.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9932" for this suite. 
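
The proxy spec above runs kubectl proxy with -p 0, which binds an ephemeral local port (the proxy prints "Starting to serve on 127.0.0.1:<port>"), and then curls /api/ through it. A rough hand-run equivalent, using a fixed port so no output parsing is needed:

# --disable-filter turns off the proxy's request path filter, as in the
# command logged above; it is unsafe outside a test environment
kubectl proxy --port=8001 --disable-filter=true &
sleep 1
# the proxy injects the kubeconfig credentials, so a plain curl reaches the API:
curl http://127.0.0.1:8001/api/
kill %1
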
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":167,"skipped":2913,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:56:30.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-95d04e63-c72e-411f-b376-10d8f0095d85 STEP: Creating a pod to test consume configMaps May 30 21:56:30.778: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f75e86de-201b-4a4a-b5f8-7cd8ad742a22" in namespace "projected-1690" to be "success or failure" May 30 21:56:30.839: INFO: Pod "pod-projected-configmaps-f75e86de-201b-4a4a-b5f8-7cd8ad742a22": Phase="Pending", Reason="", readiness=false. Elapsed: 60.670335ms May 30 21:56:32.843: INFO: Pod "pod-projected-configmaps-f75e86de-201b-4a4a-b5f8-7cd8ad742a22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064859157s May 30 21:56:34.846: INFO: Pod "pod-projected-configmaps-f75e86de-201b-4a4a-b5f8-7cd8ad742a22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068415293s STEP: Saw pod success May 30 21:56:34.846: INFO: Pod "pod-projected-configmaps-f75e86de-201b-4a4a-b5f8-7cd8ad742a22" satisfied condition "success or failure" May 30 21:56:34.849: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-f75e86de-201b-4a4a-b5f8-7cd8ad742a22 container projected-configmap-volume-test: STEP: delete the pod May 30 21:56:35.053: INFO: Waiting for pod pod-projected-configmaps-f75e86de-201b-4a4a-b5f8-7cd8ad742a22 to disappear May 30 21:56:35.191: INFO: Pod pod-projected-configmaps-f75e86de-201b-4a4a-b5f8-7cd8ad742a22 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:56:35.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1690" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2931,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:56:35.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-631.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 21:56:43.372: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:43.375: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:43.378: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:43.380: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:43.389: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:43.392: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:43.394: INFO: Unable to read jessie_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:43.397: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:43.403: INFO: Lookups using dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local] May 30 21:56:48.408: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods 
dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:48.413: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:48.417: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:48.420: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:48.428: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:48.431: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:48.434: INFO: Unable to read jessie_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:48.436: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:48.443: INFO: Lookups using dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local] May 30 21:56:53.408: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:53.411: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:53.415: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:53.418: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod 
dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:53.428: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:53.430: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:53.433: INFO: Unable to read jessie_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:53.436: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:53.442: INFO: Lookups using dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local] May 30 21:56:58.432: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:58.435: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:58.439: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:58.442: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:58.451: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:58.454: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 
21:56:58.457: INFO: Unable to read jessie_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:58.460: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:56:58.465: INFO: Lookups using dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local] May 30 21:57:03.407: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:03.411: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:03.414: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:03.417: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:03.427: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:03.431: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:03.434: INFO: Unable to read jessie_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:03.442: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:03.448: INFO: Lookups using dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local] May 30 21:57:08.408: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:08.411: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:08.415: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:08.418: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:08.428: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:08.431: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:08.442: INFO: Unable to read jessie_udp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:08.448: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local from pod dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6: the server could not find the requested resource (get pods dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6) May 30 21:57:08.456: INFO: Lookups using dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local wheezy_udp@dns-test-service-2.dns-631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-631.svc.cluster.local jessie_udp@dns-test-service-2.dns-631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-631.svc.cluster.local] May 30 21:57:13.442: INFO: DNS probes using dns-631/dns-test-d4639628-c3a1-44a5-892e-dbe3b74a4af6 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 
21:57:14.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-631" for this suite. • [SLOW TEST:38.865 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":169,"skipped":2932,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:57:14.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-ndjz STEP: Creating a pod to test atomic-volume-subpath May 30 21:57:14.186: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-ndjz" in namespace "subpath-4970" to be "success or failure" May 30 21:57:14.189: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.997936ms May 30 21:57:16.194: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007281727s May 30 21:57:18.198: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Running", Reason="", readiness=true. Elapsed: 4.012116838s May 30 21:57:20.202: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Running", Reason="", readiness=true. Elapsed: 6.015918038s May 30 21:57:22.206: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Running", Reason="", readiness=true. Elapsed: 8.019837753s May 30 21:57:24.210: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Running", Reason="", readiness=true. Elapsed: 10.02411973s May 30 21:57:26.215: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Running", Reason="", readiness=true. Elapsed: 12.028440239s May 30 21:57:28.218: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Running", Reason="", readiness=true. Elapsed: 14.032130828s May 30 21:57:30.223: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Running", Reason="", readiness=true. Elapsed: 16.036540214s May 30 21:57:32.227: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Running", Reason="", readiness=true. Elapsed: 18.040408325s May 30 21:57:34.231: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Running", Reason="", readiness=true. Elapsed: 20.044875769s May 30 21:57:36.235: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Running", Reason="", readiness=true. Elapsed: 22.048621589s May 30 21:57:38.240: INFO: Pod "pod-subpath-test-projected-ndjz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.053388527s STEP: Saw pod success May 30 21:57:38.240: INFO: Pod "pod-subpath-test-projected-ndjz" satisfied condition "success or failure" May 30 21:57:38.243: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-ndjz container test-container-subpath-projected-ndjz: STEP: delete the pod May 30 21:57:38.263: INFO: Waiting for pod pod-subpath-test-projected-ndjz to disappear May 30 21:57:38.304: INFO: Pod pod-subpath-test-projected-ndjz no longer exists STEP: Deleting pod pod-subpath-test-projected-ndjz May 30 21:57:38.304: INFO: Deleting pod "pod-subpath-test-projected-ndjz" in namespace "subpath-4970" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:57:38.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4970" for this suite. • [SLOW TEST:24.251 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":170,"skipped":2934,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:57:38.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:57:38.393: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:57:39.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8696" for this suite. 
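The create/delete cycle above amounts to registering a CustomResourceDefinition and tearing it down again. A minimal sketch with an invented group and kind; apiextensions.k8s.io/v1 (served by this 1.17 cluster) requires a structural schema per version, hence the stub:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# Deleting the CRD also removes any Foo objects it backs.
kubectl delete crd foos.example.com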
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":171,"skipped":2951,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:57:39.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 21:57:39.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ed65254-6dd7-4e87-936a-03f1d036b094" in namespace "projected-9528" to be "success or failure" May 30 21:57:39.527: INFO: Pod "downwardapi-volume-0ed65254-6dd7-4e87-936a-03f1d036b094": Phase="Pending", Reason="", readiness=false. Elapsed: 17.303728ms May 30 21:57:41.611: INFO: Pod "downwardapi-volume-0ed65254-6dd7-4e87-936a-03f1d036b094": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102007082s May 30 21:57:43.665: INFO: Pod "downwardapi-volume-0ed65254-6dd7-4e87-936a-03f1d036b094": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.155737621s STEP: Saw pod success May 30 21:57:43.665: INFO: Pod "downwardapi-volume-0ed65254-6dd7-4e87-936a-03f1d036b094" satisfied condition "success or failure" May 30 21:57:43.667: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0ed65254-6dd7-4e87-936a-03f1d036b094 container client-container: STEP: delete the pod May 30 21:57:43.701: INFO: Waiting for pod downwardapi-volume-0ed65254-6dd7-4e87-936a-03f1d036b094 to disappear May 30 21:57:43.726: INFO: Pod downwardapi-volume-0ed65254-6dd7-4e87-936a-03f1d036b094 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:57:43.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9528" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2956,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:57:43.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:57:43.831: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 5.481373ms) May 30 21:57:43.835: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.555288ms) May 30 21:57:43.839: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.253509ms) May 30 21:57:43.843: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.492815ms) May 30 21:57:43.850: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 7.017005ms) May 30 21:57:43.853: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.227333ms) May 30 21:57:43.856: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.089598ms) May 30 21:57:43.859: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.792142ms) May 30 21:57:43.862: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.054835ms) May 30 21:57:43.865: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.877251ms) May 30 21:57:43.868: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.852306ms) May 30 21:57:43.871: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.278711ms) May 30 21:57:43.875: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.549943ms) May 30 21:57:43.878: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.233874ms) May 30 21:57:43.882: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.621452ms) May 30 21:57:43.886: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.026163ms) May 30 21:57:43.889: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.208475ms) May 30 21:57:43.892: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.953595ms) May 30 21:57:43.895: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.158221ms) May 30 21:57:43.899: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 3.247652ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:57:43.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8108" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":173,"skipped":2969,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:57:43.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:57:43.965: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 30 21:57:46.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9798 create -f -' May 30 21:57:51.109: INFO: stderr: "" May 30 21:57:51.109: INFO: stdout: "e2e-test-crd-publish-openapi-8685-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 30 21:57:51.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9798 delete e2e-test-crd-publish-openapi-8685-crds test-foo' May 30 21:57:51.230: INFO: stderr: "" May 30 21:57:51.230: INFO: stdout: "e2e-test-crd-publish-openapi-8685-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 30 21:57:51.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9798 apply -f -' May 30 21:57:53.417: INFO: stderr: "" May 30 21:57:53.417: INFO: stdout: "e2e-test-crd-publish-openapi-8685-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 30 21:57:53.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9798 delete e2e-test-crd-publish-openapi-8685-crds test-foo' May 30 21:57:53.518: INFO: stderr: "" May 30 21:57:53.518: INFO: stdout: "e2e-test-crd-publish-openapi-8685-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 30 21:57:53.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9798 create -f -' May 30 21:57:55.049: INFO: rc: 1 May 30 21:57:55.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9798 apply -f -' May 30 21:57:56.226: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 30 21:57:56.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9798 create -f -' May 30 21:57:57.406: INFO: rc: 1 
May 30 21:57:57.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9798 apply -f -' May 30 21:57:58.515: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 30 21:57:58.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8685-crds' May 30 21:57:59.761: INFO: stderr: "" May 30 21:57:59.761: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8685-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 30 21:57:59.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8685-crds.metadata' May 30 21:58:00.454: INFO: stderr: "" May 30 21:58:00.454: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8685-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. 
This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 30 21:58:00.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8685-crds.spec' May 30 21:58:01.630: INFO: stderr: "" May 30 21:58:01.630: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8685-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 30 21:58:01.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8685-crds.spec.bars' May 30 21:58:03.257: INFO: stderr: "" May 30 21:58:03.257: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8685-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 30 21:58:03.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8685-crds.spec.bars2' May 30 21:58:04.324: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:58:07.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9798" for this suite. • [SLOW TEST:23.301 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":174,"skipped":2979,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:58:07.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:58:11.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5399" for this suite. 
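The conflict check above reduces to mounting two wrapped volume types (each materialized by the kubelet's emptyDir-backed atomic writer) side by side in one pod. A minimal sketch with invented names:

kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # Both mounts should be populated without clobbering each other.
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret
  - name: configmap-volume
    configMap:
      name: wrapper-configmap
EOF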
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":175,"skipped":3018,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:58:11.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 30 21:58:15.645: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:58:15.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8173" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":3026,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:58:15.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 
'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:58:47.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1522" for this suite. • [SLOW TEST:31.925 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":3048,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:58:47.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-dc7677e0-3e34-46de-8231-a543b6137c22 STEP: Creating a pod to test consume secrets May 30 21:58:47.745: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8bf86bdf-a01e-4601-81f4-dcdf4fcc516a" in namespace "projected-7121" to be "success or failure" May 30 21:58:47.749: INFO: Pod "pod-projected-secrets-8bf86bdf-a01e-4601-81f4-dcdf4fcc516a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.722175ms May 30 21:58:49.752: INFO: Pod "pod-projected-secrets-8bf86bdf-a01e-4601-81f4-dcdf4fcc516a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006610195s May 30 21:58:51.755: INFO: Pod "pod-projected-secrets-8bf86bdf-a01e-4601-81f4-dcdf4fcc516a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009702843s STEP: Saw pod success May 30 21:58:51.755: INFO: Pod "pod-projected-secrets-8bf86bdf-a01e-4601-81f4-dcdf4fcc516a" satisfied condition "success or failure" May 30 21:58:51.757: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-8bf86bdf-a01e-4601-81f4-dcdf4fcc516a container projected-secret-volume-test: STEP: delete the pod May 30 21:58:51.786: INFO: Waiting for pod pod-projected-secrets-8bf86bdf-a01e-4601-81f4-dcdf4fcc516a to disappear May 30 21:58:51.791: INFO: Pod pod-projected-secrets-8bf86bdf-a01e-4601-81f4-dcdf4fcc516a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:58:51.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7121" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":3054,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:58:51.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:58:52.402: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:58:54.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472732, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472732, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472732, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472732, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:58:57.447: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) 
STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:59:09.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-45" for this suite. STEP: Destroying namespace "webhook-45-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.008 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":179,"skipped":3070,"failed":0} S ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:59:09.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-9189f0b9-984e-4002-a285-5912175fb1db STEP: Creating configMap with name cm-test-opt-upd-ff9b3505-72b2-4a8a-92b1-bc1515501ea4 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9189f0b9-984e-4002-a285-5912175fb1db STEP: Updating configmap cm-test-opt-upd-ff9b3505-72b2-4a8a-92b1-bc1515501ea4 STEP: Creating configMap with name cm-test-opt-create-03c3bbb3-f001-435b-9104-342a54d58ee2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:59:17.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3334" for this suite. 
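The optional-updates test above projects two configMaps into a single volume with optional set, deletes one, patches the other, creates a third, and waits for the mounted files to converge. A minimal sketch of the same projection, assuming cluster access via the current kubeconfig and using illustrative names rather than the generated ones from this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/projected
  volumes:
  - name: cm-volume
    projected:
      sources:
      - configMap:
          name: cm-demo        # may not exist yet; optional lets the pod start anyway
          optional: true
EOF
# create the configMap afterwards; the kubelet projects it on a later sync,
# so the file can take up to a sync period to appear
kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl exec projected-cm-demo -- sh -c 'until [ -e /etc/projected/data-1 ]; do sleep 2; done; cat /etc/projected/data-1'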
• [SLOW TEST:8.197 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3071,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:59:18.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 21:59:19.002: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 21:59:21.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472759, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472759, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472759, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726472758, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 21:59:24.048: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:59:24.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting 
the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:59:25.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3891" for this suite. STEP: Destroying namespace "webhook-3891-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.574 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":181,"skipped":3078,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:59:25.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-740115a2-b790-4d56-97fe-ba9141eb32cc STEP: Creating the pod STEP: Updating configmap configmap-test-upd-740115a2-b790-4d56-97fe-ba9141eb32cc STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:59:32.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1023" for this suite. 
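The propagation the test waits for ("waiting to observe update in volume") comes from the kubelet re-projecting configMap volumes on its periodic sync, so an edit to the configMap appears in already-mounted files after a short delay. A hand-run equivalent, names illustrative:

kubectl create configmap cm-upd-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-upd-pod
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-upd-demo
EOF
kubectl wait --for=condition=Ready pod/cm-upd-pod
kubectl exec cm-upd-pod -- cat /etc/cm/data-1          # value-1
kubectl patch configmap cm-upd-demo -p '{"data":{"data-1":"value-2"}}'
# poll until the kubelet has re-projected the volume
kubectl exec cm-upd-pod -- sh -c 'until grep -q value-2 /etc/cm/data-1; do sleep 2; done; echo updated'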
• [SLOW TEST:6.492 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:59:32.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-ef57c49b-114c-4728-b139-c18bab79d585 STEP: Creating a pod to test consume configMaps May 30 21:59:32.221: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1931952-fb96-4a98-b1f3-962ee8b9f7e8" in namespace "configmap-373" to be "success or failure" May 30 21:59:32.224: INFO: Pod "pod-configmaps-c1931952-fb96-4a98-b1f3-962ee8b9f7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.100127ms May 30 21:59:34.229: INFO: Pod "pod-configmaps-c1931952-fb96-4a98-b1f3-962ee8b9f7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008090595s May 30 21:59:36.235: INFO: Pod "pod-configmaps-c1931952-fb96-4a98-b1f3-962ee8b9f7e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013808132s STEP: Saw pod success May 30 21:59:36.235: INFO: Pod "pod-configmaps-c1931952-fb96-4a98-b1f3-962ee8b9f7e8" satisfied condition "success or failure" May 30 21:59:36.238: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c1931952-fb96-4a98-b1f3-962ee8b9f7e8 container configmap-volume-test: STEP: delete the pod May 30 21:59:36.262: INFO: Waiting for pod pod-configmaps-c1931952-fb96-4a98-b1f3-962ee8b9f7e8 to disappear May 30 21:59:36.266: INFO: Pod pod-configmaps-c1931952-fb96-4a98-b1f3-962ee8b9f7e8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 21:59:36.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-373" for this suite. 
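The mappings variant consumes the configMap through an items list, which exposes a chosen key under a different file path, while the pod runs as a non-root UID. A rough sketch of the kind of pod the framework generates, names and UID illustrative:

kubectl create configmap cm-map-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-map-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root, the point of the [LinuxOnly] variant
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-map-demo
      items:
      - key: data-1
        path: path/to/data-2   # the key is exposed under this mapped path
EOF
kubectl logs cm-map-pod        # value-1, once the pod has completed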
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3135,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 21:59:36.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 21:59:36.505: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 30 21:59:36.550: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:36.560: INFO: Number of nodes with available pods: 0 May 30 21:59:36.560: INFO: Node jerma-worker is running more than one daemon pod May 30 21:59:37.570: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:37.572: INFO: Number of nodes with available pods: 0 May 30 21:59:37.572: INFO: Node jerma-worker is running more than one daemon pod May 30 21:59:38.566: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:38.569: INFO: Number of nodes with available pods: 0 May 30 21:59:38.569: INFO: Node jerma-worker is running more than one daemon pod May 30 21:59:39.580: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:39.583: INFO: Number of nodes with available pods: 0 May 30 21:59:39.583: INFO: Node jerma-worker is running more than one daemon pod May 30 21:59:40.566: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:40.570: INFO: Number of nodes with available pods: 0 May 30 21:59:40.570: INFO: Node jerma-worker is running more than one daemon pod May 30 21:59:41.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:41.623: INFO: Number of nodes with available pods: 2 May 30 21:59:41.623: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 30 21:59:41.658: INFO: Wrong image for pod: daemon-set-kxmx7. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:41.658: INFO: Wrong image for pod: daemon-set-q7qg4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:41.690: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:42.693: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:42.693: INFO: Wrong image for pod: daemon-set-q7qg4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:42.695: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:43.694: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:43.694: INFO: Wrong image for pod: daemon-set-q7qg4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:43.697: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:44.694: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:44.695: INFO: Wrong image for pod: daemon-set-q7qg4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:44.699: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:45.694: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:45.694: INFO: Wrong image for pod: daemon-set-q7qg4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:45.694: INFO: Pod daemon-set-q7qg4 is not available May 30 21:59:45.698: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:46.694: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:46.694: INFO: Wrong image for pod: daemon-set-q7qg4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:46.694: INFO: Pod daemon-set-q7qg4 is not available May 30 21:59:46.699: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:47.693: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:47.693: INFO: Wrong image for pod: daemon-set-q7qg4. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:47.693: INFO: Pod daemon-set-q7qg4 is not available May 30 21:59:47.697: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:48.694: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:48.695: INFO: Wrong image for pod: daemon-set-q7qg4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:48.695: INFO: Pod daemon-set-q7qg4 is not available May 30 21:59:48.700: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:49.695: INFO: Pod daemon-set-kmtzv is not available May 30 21:59:49.695: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:49.699: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:50.694: INFO: Pod daemon-set-kmtzv is not available May 30 21:59:50.694: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:50.698: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:51.694: INFO: Pod daemon-set-kmtzv is not available May 30 21:59:51.694: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:51.697: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:52.695: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:52.700: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:53.694: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:53.694: INFO: Pod daemon-set-kxmx7 is not available May 30 21:59:53.699: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:54.695: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:54.695: INFO: Pod daemon-set-kxmx7 is not available May 30 21:59:54.699: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:55.695: INFO: Wrong image for pod: daemon-set-kxmx7. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:55.695: INFO: Pod daemon-set-kxmx7 is not available May 30 21:59:55.700: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:56.694: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:56.694: INFO: Pod daemon-set-kxmx7 is not available May 30 21:59:56.698: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:57.696: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:57.696: INFO: Pod daemon-set-kxmx7 is not available May 30 21:59:57.700: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:58.695: INFO: Wrong image for pod: daemon-set-kxmx7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 30 21:59:58.695: INFO: Pod daemon-set-kxmx7 is not available May 30 21:59:58.700: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:59.695: INFO: Pod daemon-set-z8g8n is not available May 30 21:59:59.700: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 30 21:59:59.703: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 21:59:59.706: INFO: Number of nodes with available pods: 1 May 30 21:59:59.706: INFO: Node jerma-worker is running more than one daemon pod May 30 22:00:00.722: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:00:00.725: INFO: Number of nodes with available pods: 1 May 30 22:00:00.725: INFO: Node jerma-worker is running more than one daemon pod May 30 22:00:01.711: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:00:01.713: INFO: Number of nodes with available pods: 1 May 30 22:00:01.713: INFO: Node jerma-worker is running more than one daemon pod May 30 22:00:02.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:00:02.716: INFO: Number of nodes with available pods: 2 May 30 22:00:02.716: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7345, will wait for the garbage collector to delete the pods May 30 22:00:02.791: INFO: Deleting DaemonSet.extensions daemon-set took: 6.972243ms May 30 22:00:03.191: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.377495ms May 30 22:00:09.595: INFO: Number of nodes with available pods: 0 May 30 22:00:09.595: INFO: Number of running nodes: 0, number of available pods: 0 May 30 22:00:09.598: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7345/daemonsets","resourceVersion":"20440717"},"items":null} May 30 22:00:09.601: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7345/pods","resourceVersion":"20440717"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:00:09.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7345" for this suite. 
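The long "Wrong image for pod" sequence above is the RollingUpdate strategy at work: with the default maxUnavailable of 1, the controller replaces one daemon pod at a time until every pod runs the new image. Triggering the same rollout by hand, reusing the two images from this run and illustrative object names:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-demo
spec:
  selector:
    matchLabels:
      app: ds-demo
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # the default: replace one daemon pod at a time
  template:
    metadata:
      labels:
        app: ds-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl rollout status daemonset/ds-demo
kubectl set image daemonset/ds-demo app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
kubectl rollout status daemonset/ds-demo   # blocks until the rolling replacement finishes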
• [SLOW TEST:33.344 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":184,"skipped":3135,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:00:09.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 30 22:00:14.287: INFO: Successfully updated pod "pod-update-d26a2831-672b-4c75-b2f8-94b170a3e16c" STEP: verifying the updated pod is in kubernetes May 30 22:00:14.300: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:00:14.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-685" for this suite. 
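Most of a pod's spec is immutable once the pod exists; the "updating the pod" step exercises one of the mutable fields (metadata labels). The same update by hand, names illustrative:

kubectl run pod-update-demo --restart=Never --image=docker.io/library/httpd:2.4.38-alpine
kubectl label pod pod-update-demo time=now --overwrite   # labels can change on a live pod
kubectl get pod pod-update-demo --show-labels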
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:00:14.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:00:14.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 30 22:00:15.073: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T22:00:15Z generation:1 name:name1 resourceVersion:20440785 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:1b14e467-7f34-457b-8599-cbe33d2ea01c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 30 22:00:25.078: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T22:00:25Z generation:1 name:name2 resourceVersion:20440827 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:323bd707-9ff0-440c-8e99-be4a7f88d694] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 30 22:00:35.083: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T22:00:15Z generation:2 name:name1 resourceVersion:20440859 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:1b14e467-7f34-457b-8599-cbe33d2ea01c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 30 22:00:45.090: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T22:00:25Z generation:2 name:name2 resourceVersion:20440890 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:323bd707-9ff0-440c-8e99-be4a7f88d694] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 30 22:00:55.098: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T22:00:15Z generation:2 name:name1 resourceVersion:20440919 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:1b14e467-7f34-457b-8599-cbe33d2ea01c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 30 22:01:05.107: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T22:00:25Z generation:2 name:name2 resourceVersion:20440949 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:323bd707-9ff0-440c-8e99-be4a7f88d694] 
num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:01:15.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2711" for this suite. • [SLOW TEST:61.328 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":186,"skipped":3188,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:01:15.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 30 22:01:21.738: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2121 PodName:pod-sharedvolume-d959fd79-4e65-4e6d-8fe3-f7a1539ad6d3 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 22:01:21.738: INFO: >>> kubeConfig: /root/.kube/config I0530 22:01:21.773916 6 log.go:172] (0xc00293c630) (0xc0016be0a0) Create stream I0530 22:01:21.773942 6 log.go:172] (0xc00293c630) (0xc0016be0a0) Stream added, broadcasting: 1 I0530 22:01:21.775638 6 log.go:172] (0xc00293c630) Reply frame received for 1 I0530 22:01:21.775677 6 log.go:172] (0xc00293c630) (0xc001ce6000) Create stream I0530 22:01:21.775697 6 log.go:172] (0xc00293c630) (0xc001ce6000) Stream added, broadcasting: 3 I0530 22:01:21.777231 6 log.go:172] (0xc00293c630) Reply frame received for 3 I0530 22:01:21.777268 6 log.go:172] (0xc00293c630) (0xc0016be280) Create stream I0530 22:01:21.777293 6 log.go:172] (0xc00293c630) (0xc0016be280) Stream added, broadcasting: 5 I0530 22:01:21.778566 6 log.go:172] (0xc00293c630) Reply frame received for 5 I0530 22:01:21.871714 6 log.go:172] (0xc00293c630) Data frame received for 5 I0530 22:01:21.871745 6 log.go:172] (0xc0016be280) (5) Data frame handling I0530 22:01:21.871771 6 log.go:172] (0xc00293c630) Data frame received for 3 I0530 22:01:21.871784 6 log.go:172] (0xc001ce6000) (3) Data frame handling I0530 22:01:21.871798 6 log.go:172] (0xc001ce6000) (3) Data frame sent I0530
22:01:21.871810 6 log.go:172] (0xc00293c630) Data frame received for 3 I0530 22:01:21.871821 6 log.go:172] (0xc001ce6000) (3) Data frame handling I0530 22:01:21.873964 6 log.go:172] (0xc00293c630) Data frame received for 1 I0530 22:01:21.874000 6 log.go:172] (0xc0016be0a0) (1) Data frame handling I0530 22:01:21.874018 6 log.go:172] (0xc0016be0a0) (1) Data frame sent I0530 22:01:21.874037 6 log.go:172] (0xc00293c630) (0xc0016be0a0) Stream removed, broadcasting: 1 I0530 22:01:21.874054 6 log.go:172] (0xc00293c630) Go away received I0530 22:01:21.874264 6 log.go:172] (0xc00293c630) (0xc0016be0a0) Stream removed, broadcasting: 1 I0530 22:01:21.874292 6 log.go:172] (0xc00293c630) (0xc001ce6000) Stream removed, broadcasting: 3 I0530 22:01:21.874313 6 log.go:172] (0xc00293c630) (0xc0016be280) Stream removed, broadcasting: 5 May 30 22:01:21.874: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:01:21.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2121" for this suite. • [SLOW TEST:6.247 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":187,"skipped":3203,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:01:21.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 30 22:01:21.996: INFO: Waiting up to 5m0s for pod "downward-api-5a61fe02-1f53-4cf8-a6dd-8b478c22b641" in namespace "downward-api-988" to be "success or failure" May 30 22:01:22.012: INFO: Pod "downward-api-5a61fe02-1f53-4cf8-a6dd-8b478c22b641": Phase="Pending", Reason="", readiness=false. Elapsed: 15.948222ms May 30 22:01:24.016: INFO: Pod "downward-api-5a61fe02-1f53-4cf8-a6dd-8b478c22b641": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020463928s May 30 22:01:26.021: INFO: Pod "downward-api-5a61fe02-1f53-4cf8-a6dd-8b478c22b641": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024849487s STEP: Saw pod success May 30 22:01:26.021: INFO: Pod "downward-api-5a61fe02-1f53-4cf8-a6dd-8b478c22b641" satisfied condition "success or failure" May 30 22:01:26.024: INFO: Trying to get logs from node jerma-worker2 pod downward-api-5a61fe02-1f53-4cf8-a6dd-8b478c22b641 container dapi-container: STEP: delete the pod May 30 22:01:26.082: INFO: Waiting for pod downward-api-5a61fe02-1f53-4cf8-a6dd-8b478c22b641 to disappear May 30 22:01:26.096: INFO: Pod downward-api-5a61fe02-1f53-4cf8-a6dd-8b478c22b641 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:01:26.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-988" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3234,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:01:26.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0530 22:01:27.316456 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 30 22:01:27.316: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:01:27.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3667" for this suite. 
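Deleting the deployment without orphaning leaves the ReplicaSet and pods holding an ownerReference to a deleted owner, and the garbage collector removes them shortly after; the "expected 0 rs, got 1 rs" lines above are the test polling during that window. A hand-run equivalent, names illustrative:

kubectl create deployment gc-demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl get rs -l app=gc-demo          # one ReplicaSet, owned by the deployment
kubectl delete deployment gc-demo      # default, non-orphaning delete
kubectl get rs,pods -l app=gc-demo     # drains to empty once the GC catches up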
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":189,"skipped":3246,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:01:27.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 30 22:01:27.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3780' May 30 22:01:27.651: INFO: stderr: "" May 30 22:01:27.651: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 30 22:01:27.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3780' May 30 22:01:32.374: INFO: stderr: "" May 30 22:01:32.374: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:01:32.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3780" for this suite. 
• [SLOW TEST:5.019 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":190,"skipped":3247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:01:32.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 30 22:01:32.590: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:01:48.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2805" for this suite. 
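The CustomResourcePublishOpenAPI tests read the aggregated OpenAPI document that the apiserver republishes as CRD versions change; after the rename, definitions under the old version name disappear and the new ones are served. One rough way to inspect that by hand, assuming a CRD of your own in a group like the test's mygroup.example.com (the resource name below is illustrative):

kubectl get --raw /openapi/v2 > openapi.json
# definition names for served CRD versions embed the group and version
grep -o '"[^"]*mygroup[^"]*"' openapi.json | sort -u
# kubectl explain reflects the same published schema per resource
kubectl explain e2e-test-crds --recursive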
• [SLOW TEST:16.289 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":191,"skipped":3299,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:01:48.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 30 22:01:48.812: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:02:03.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9490" for this suite.
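Marking a version served: false (the "mark a version not served" step above) keeps the version stored in the CRD but drops its definitions from the published spec and stops serving API requests for it. A sketch of the patch, with an illustrative CRD name and version index:

# stop serving the second entry of spec.versions (the index 1 here is illustrative)
kubectl patch crd e2e-test-crds.mygroup.example.com --type=json \
  -p '[{"op":"replace","path":"/spec/versions/1/served","value":false}]'
# the apiserver then republishes /openapi/v2 without that version's definitions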
• [SLOW TEST:14.680 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":192,"skipped":3302,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:02:03.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:02:03.453: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-6d87e6d3-0533-4678-980f-dfbe13fad01a" in namespace "security-context-test-5458" to be "success or failure" May 30 22:02:03.457: INFO: Pod "busybox-privileged-false-6d87e6d3-0533-4678-980f-dfbe13fad01a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117975ms May 30 22:02:05.462: INFO: Pod "busybox-privileged-false-6d87e6d3-0533-4678-980f-dfbe13fad01a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008538564s May 30 22:02:07.465: INFO: Pod "busybox-privileged-false-6d87e6d3-0533-4678-980f-dfbe13fad01a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011688179s May 30 22:02:07.465: INFO: Pod "busybox-privileged-false-6d87e6d3-0533-4678-980f-dfbe13fad01a" satisfied condition "success or failure" May 30 22:02:07.487: INFO: Got logs for pod "busybox-privileged-false-6d87e6d3-0533-4678-980f-dfbe13fad01a": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:02:07.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5458" for this suite. 
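The captured container log "ip: RTNETLINK answers: Operation not permitted" is the assertion target: without privileged mode the container lacks CAP_NET_ADMIN, so creating a network link fails while the pod itself still succeeds. A reproduction along the same lines, names illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # try a CAP_NET_ADMIN operation but exit 0 either way,
    # so the pod still reaches Succeeded
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false
EOF
kubectl logs unprivileged-demo   # once completed: ip: RTNETLINK answers: Operation not permitted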
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3309,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:02:07.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 30 22:02:07.642: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:02:07.672: INFO: Number of nodes with available pods: 0 May 30 22:02:07.672: INFO: Node jerma-worker is running more than one daemon pod May 30 22:02:08.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:02:08.682: INFO: Number of nodes with available pods: 0 May 30 22:02:08.682: INFO: Node jerma-worker is running more than one daemon pod May 30 22:02:09.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:02:09.679: INFO: Number of nodes with available pods: 0 May 30 22:02:09.679: INFO: Node jerma-worker is running more than one daemon pod May 30 22:02:10.677: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:02:10.680: INFO: Number of nodes with available pods: 0 May 30 22:02:10.680: INFO: Node jerma-worker is running more than one daemon pod May 30 22:02:11.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:02:11.682: INFO: Number of nodes with available pods: 1 May 30 22:02:11.682: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:02:12.686: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:02:12.696: INFO: Number of nodes with available pods: 2 May 30 22:02:12.696: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 30 22:02:12.732: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:02:12.745: INFO: Number of nodes with available pods: 1 May 30 22:02:12.745: INFO: Node jerma-worker is running more than one daemon pod May 30 22:02:13.750: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:02:13.753: INFO: Number of nodes with available pods: 1 May 30 22:02:13.753: INFO: Node jerma-worker is running more than one daemon pod May 30 22:02:14.750: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:02:14.754: INFO: Number of nodes with available pods: 1 May 30 22:02:14.754: INFO: Node jerma-worker is running more than one daemon pod May 30 22:02:15.750: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:02:15.753: INFO: Number of nodes with available pods: 2 May 30 22:02:15.753: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1223, will wait for the garbage collector to delete the pods May 30 22:02:15.818: INFO: Deleting DaemonSet.extensions daemon-set took: 6.64387ms May 30 22:02:16.118: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.257923ms May 30 22:02:29.621: INFO: Number of nodes with available pods: 0 May 30 22:02:29.621: INFO: Number of running nodes: 0, number of available pods: 0 May 30 22:02:29.624: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1223/daemonsets","resourceVersion":"20441442"},"items":null} May 30 22:02:29.651: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1223/pods","resourceVersion":"20441442"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:02:29.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1223" for this suite. 
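The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines are expected: the DaemonSet template carries no toleration for the control-plane's node-role.kubernetes.io/master:NoSchedule taint, so only the two worker nodes count toward availability. A sketch of a comparable DaemonSet (labels and image are illustrative assumptions):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      # no toleration for node-role.kubernetes.io/master:NoSchedule,
      # so the controller skips the tainted control-plane node
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # illustrative image

The retry behaviour itself is exercised by patching one daemon pod's status.phase to Failed (the STEP above) and asserting that the controller deletes it and brings a replacement back to Available on the same node.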
• [SLOW TEST:22.150 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":194,"skipped":3319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:02:29.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0530 22:02:39.852136 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 30 22:02:39.852: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:02:39.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4016" for this suite. 
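This garbage-collector test relies on each pod carrying an ownerReference back to the ReplicationController that created it; deleting the RC without orphaning (a Background or Foreground propagationPolicy rather than Orphan) lets the garbage collector remove the dependents, which is what "wait for all pods to be garbage collected" asserts. A sketch of the metadata the controller stamps on such a pod (names, image, and the UID placeholder are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-pod                         # illustrative
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest.rc                           # illustrative owner
    uid: 00000000-0000-0000-0000-000000000000     # placeholder for the RC's actual UID
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine    # illustrative image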
• [SLOW TEST:10.191 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":195,"skipped":3353,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:02:39.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 30 22:02:40.443: INFO: created pod pod-service-account-defaultsa May 30 22:02:40.443: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 30 22:02:40.451: INFO: created pod pod-service-account-mountsa May 30 22:02:40.451: INFO: pod pod-service-account-mountsa service account token volume mount: true May 30 22:02:40.457: INFO: created pod pod-service-account-nomountsa May 30 22:02:40.457: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 30 22:02:40.514: INFO: created pod pod-service-account-defaultsa-mountspec May 30 22:02:40.514: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 30 22:02:40.549: INFO: created pod pod-service-account-mountsa-mountspec May 30 22:02:40.549: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 30 22:02:40.566: INFO: created pod pod-service-account-nomountsa-mountspec May 30 22:02:40.566: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 30 22:02:40.608: INFO: created pod pod-service-account-defaultsa-nomountspec May 30 22:02:40.608: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 30 22:02:40.651: INFO: created pod pod-service-account-mountsa-nomountspec May 30 22:02:40.651: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 30 22:02:40.688: INFO: created pod pod-service-account-nomountsa-nomountspec May 30 22:02:40.688: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:02:40.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6170" for this suite. 
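The nine pods above walk the precedence matrix for token automounting: when spec.automountServiceAccountToken is set on the pod it wins; otherwise the ServiceAccount's automountServiceAccountToken applies; if neither is set, the token is mounted. That is why pod-service-account-nomountsa-mountspec reports mount: true even though its ServiceAccount opts out. A sketch of one cell of that matrix (names and image are illustrative assumptions):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                     # illustrative; opts out of automount
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa-mountspec        # illustrative
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true   # pod-level field overrides the ServiceAccount
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1        # illustrative image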
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":196,"skipped":3360,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:02:40.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:02:41.011: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 30 22:02:42.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1519 create -f -' May 30 22:02:56.594: INFO: stderr: "" May 30 22:02:56.594: INFO: stdout: "e2e-test-crd-publish-openapi-109-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 30 22:02:56.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1519 delete e2e-test-crd-publish-openapi-109-crds test-cr' May 30 22:02:56.703: INFO: stderr: "" May 30 22:02:56.703: INFO: stdout: "e2e-test-crd-publish-openapi-109-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 30 22:02:56.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1519 apply -f -' May 30 22:02:59.474: INFO: stderr: "" May 30 22:02:59.474: INFO: stdout: "e2e-test-crd-publish-openapi-109-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 30 22:02:59.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1519 delete e2e-test-crd-publish-openapi-109-crds test-cr' May 30 22:02:59.576: INFO: stderr: "" May 30 22:02:59.576: INFO: stdout: "e2e-test-crd-publish-openapi-109-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 30 22:02:59.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-109-crds' May 30 22:03:01.791: INFO: stderr: "" May 30 22:03:01.791: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-109-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:03:03.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1519" for this suite. 
• [SLOW TEST:22.804 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":197,"skipped":3364,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:03:03.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:03:21.790: INFO: Container started at 2020-05-30 22:03:06 +0000 UTC, pod became ready at 2020-05-30 22:03:21 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:03:21.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2381" for this suite. 
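The timestamps above show the point of the test: the container started at 22:03:06 but the pod only became Ready at 22:03:21, i.e. not before the probe's initial delay, and restartCount stayed at 0 because a readiness probe never restarts a container. A sketch of the shape of such a pod (image, command, and probe parameters are illustrative assumptions, not the test's exact values):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-initial-delay        # illustrative
spec:
  containers:
  - name: probe-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/ready && sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 15          # pod must not report Ready before this elapses
      periodSeconds: 5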
• [SLOW TEST:18.122 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3369,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:03:21.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 30 22:03:21.891: INFO: PodSpec: initContainers in spec.initContainers May 30 22:04:18.100: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0eeb9eb4-99fe-4fbb-a83f-621840d6d933", GenerateName:"", Namespace:"init-container-572", SelfLink:"/api/v1/namespaces/init-container-572/pods/pod-init-0eeb9eb4-99fe-4fbb-a83f-621840d6d933", UID:"a7bf20b2-72c0-43f7-b851-115bbebd1c06", ResourceVersion:"20442004", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726473001, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"891969599"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2mkrx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002d2b440), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2mkrx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2mkrx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2mkrx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005997a68), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0026d0120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005997b10)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005997b30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005997b38), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc005997b3c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473002, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473002, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473002, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473001, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.88", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.88"}}, StartTime:(*v1.Time)(0xc005c4fec0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc005c4ff00), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0022cea80)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://1e8fc0e847d267662643f7250c41c1653fd7459064a50aa7315ccaf84e503526", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc005c4ff20), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc005c4fee0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc005997bef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:04:18.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-572" for this suite. • [SLOW TEST:56.362 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":199,"skipped":3374,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:04:18.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:04:18.252: INFO: Waiting up to 5m0s for pod "busybox-user-65534-49e0f360-a805-4ec9-9ba3-ecf440a01e86" in namespace "security-context-test-4995" to be "success or failure" May 30 22:04:18.282: INFO: Pod "busybox-user-65534-49e0f360-a805-4ec9-9ba3-ecf440a01e86": Phase="Pending", Reason="", readiness=false. Elapsed: 29.152216ms May 30 22:04:20.285: INFO: Pod "busybox-user-65534-49e0f360-a805-4ec9-9ba3-ecf440a01e86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032492679s May 30 22:04:22.289: INFO: Pod "busybox-user-65534-49e0f360-a805-4ec9-9ba3-ecf440a01e86": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.036673153s May 30 22:04:24.293: INFO: Pod "busybox-user-65534-49e0f360-a805-4ec9-9ba3-ecf440a01e86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040535046s May 30 22:04:24.293: INFO: Pod "busybox-user-65534-49e0f360-a805-4ec9-9ba3-ecf440a01e86" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:04:24.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4995" for this suite. • [SLOW TEST:6.141 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3375,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:04:24.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 30 22:04:24.405: INFO: Waiting up to 5m0s for pod "downward-api-dd66b9d8-46b5-45cd-aaf6-91f3e93e75e7" in namespace "downward-api-1118" to be "success or failure" May 30 22:04:24.412: INFO: Pod "downward-api-dd66b9d8-46b5-45cd-aaf6-91f3e93e75e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.662765ms May 30 22:04:26.426: INFO: Pod "downward-api-dd66b9d8-46b5-45cd-aaf6-91f3e93e75e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020617745s May 30 22:04:28.430: INFO: Pod "downward-api-dd66b9d8-46b5-45cd-aaf6-91f3e93e75e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024636715s STEP: Saw pod success May 30 22:04:28.430: INFO: Pod "downward-api-dd66b9d8-46b5-45cd-aaf6-91f3e93e75e7" satisfied condition "success or failure" May 30 22:04:28.433: INFO: Trying to get logs from node jerma-worker pod downward-api-dd66b9d8-46b5-45cd-aaf6-91f3e93e75e7 container dapi-container: STEP: delete the pod May 30 22:04:28.492: INFO: Waiting for pod downward-api-dd66b9d8-46b5-45cd-aaf6-91f3e93e75e7 to disappear May 30 22:04:28.539: INFO: Pod downward-api-dd66b9d8-46b5-45cd-aaf6-91f3e93e75e7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:04:28.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1118" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3390,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:04:28.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 30 22:04:28.763: INFO: Waiting up to 5m0s for pod "pod-60564596-66f4-4304-84e0-86b63a099c61" in namespace "emptydir-2665" to be "success or failure" May 30 22:04:28.791: INFO: Pod "pod-60564596-66f4-4304-84e0-86b63a099c61": Phase="Pending", Reason="", readiness=false. Elapsed: 27.481514ms May 30 22:04:30.795: INFO: Pod "pod-60564596-66f4-4304-84e0-86b63a099c61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031687728s May 30 22:04:32.799: INFO: Pod "pod-60564596-66f4-4304-84e0-86b63a099c61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035675316s STEP: Saw pod success May 30 22:04:32.799: INFO: Pod "pod-60564596-66f4-4304-84e0-86b63a099c61" satisfied condition "success or failure" May 30 22:04:32.802: INFO: Trying to get logs from node jerma-worker2 pod pod-60564596-66f4-4304-84e0-86b63a099c61 container test-container: STEP: delete the pod May 30 22:04:32.858: INFO: Waiting for pod pod-60564596-66f4-4304-84e0-86b63a099c61 to disappear May 30 22:04:32.881: INFO: Pod pod-60564596-66f4-4304-84e0-86b63a099c61 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:04:32.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2665" for this suite. 
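"Volume on tmpfs" refers to an emptyDir with medium Memory: the kubelet backs the volume with a tmpfs mount, and the test's container inspects that mount to verify its type and default mode. A sketch (names, image, and command are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs                 # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs instead of node-local disk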
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3392,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:04:32.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 30 22:04:32.938: INFO: Waiting up to 5m0s for pod "downward-api-5577c712-35da-462e-911b-f18c28d5d938" in namespace "downward-api-3604" to be "success or failure" May 30 22:04:32.976: INFO: Pod "downward-api-5577c712-35da-462e-911b-f18c28d5d938": Phase="Pending", Reason="", readiness=false. Elapsed: 38.003221ms May 30 22:04:35.029: INFO: Pod "downward-api-5577c712-35da-462e-911b-f18c28d5d938": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090857435s May 30 22:04:37.034: INFO: Pod "downward-api-5577c712-35da-462e-911b-f18c28d5d938": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095089859s STEP: Saw pod success May 30 22:04:37.034: INFO: Pod "downward-api-5577c712-35da-462e-911b-f18c28d5d938" satisfied condition "success or failure" May 30 22:04:37.036: INFO: Trying to get logs from node jerma-worker2 pod downward-api-5577c712-35da-462e-911b-f18c28d5d938 container dapi-container: STEP: delete the pod May 30 22:04:37.054: INFO: Waiting for pod downward-api-5577c712-35da-462e-911b-f18c28d5d938 to disappear May 30 22:04:37.083: INFO: Pod downward-api-5577c712-35da-462e-911b-f18c28d5d938 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:04:37.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3604" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:04:37.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 22:04:37.181: INFO: Waiting up to 5m0s for pod "downwardapi-volume-969d9585-f68e-4b39-9c8f-347caa69fc8b" in namespace "projected-9677" to be "success or failure" May 30 22:04:37.200: INFO: Pod "downwardapi-volume-969d9585-f68e-4b39-9c8f-347caa69fc8b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.830724ms May 30 22:04:39.204: INFO: Pod "downwardapi-volume-969d9585-f68e-4b39-9c8f-347caa69fc8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022362355s May 30 22:04:41.209: INFO: Pod "downwardapi-volume-969d9585-f68e-4b39-9c8f-347caa69fc8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027292873s STEP: Saw pod success May 30 22:04:41.209: INFO: Pod "downwardapi-volume-969d9585-f68e-4b39-9c8f-347caa69fc8b" satisfied condition "success or failure" May 30 22:04:41.212: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-969d9585-f68e-4b39-9c8f-347caa69fc8b container client-container: STEP: delete the pod May 30 22:04:41.271: INFO: Waiting for pod downwardapi-volume-969d9585-f68e-4b39-9c8f-347caa69fc8b to disappear May 30 22:04:41.274: INFO: Pod downwardapi-volume-969d9585-f68e-4b39-9c8f-347caa69fc8b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:04:41.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9677" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3461,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:04:41.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 30 22:04:41.406: INFO: Waiting up to 5m0s for pod "pod-f69d1cb5-a5e8-4518-b952-7648db2befb4" in namespace "emptydir-6838" to be "success or failure" May 30 22:04:41.442: INFO: Pod "pod-f69d1cb5-a5e8-4518-b952-7648db2befb4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.976993ms May 30 22:04:43.446: INFO: Pod "pod-f69d1cb5-a5e8-4518-b952-7648db2befb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039372913s May 30 22:04:45.582: INFO: Pod "pod-f69d1cb5-a5e8-4518-b952-7648db2befb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.175187062s STEP: Saw pod success May 30 22:04:45.582: INFO: Pod "pod-f69d1cb5-a5e8-4518-b952-7648db2befb4" satisfied condition "success or failure" May 30 22:04:45.585: INFO: Trying to get logs from node jerma-worker2 pod pod-f69d1cb5-a5e8-4518-b952-7648db2befb4 container test-container: STEP: delete the pod May 30 22:04:45.680: INFO: Waiting for pod pod-f69d1cb5-a5e8-4518-b952-7648db2befb4 to disappear May 30 22:04:45.731: INFO: Pod pod-f69d1cb5-a5e8-4518-b952-7648db2befb4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:04:45.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6838" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3515,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:04:45.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 30 22:04:45.810: INFO: Waiting up to 5m0s for pod "pod-8142b8a2-49e9-4d4c-8d23-a77f144b5cac" in namespace "emptydir-3930" to be "success or failure" May 30 22:04:45.821: INFO: Pod "pod-8142b8a2-49e9-4d4c-8d23-a77f144b5cac": Phase="Pending", Reason="", readiness=false. Elapsed: 10.966028ms May 30 22:04:47.977: INFO: Pod "pod-8142b8a2-49e9-4d4c-8d23-a77f144b5cac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167105834s May 30 22:04:49.982: INFO: Pod "pod-8142b8a2-49e9-4d4c-8d23-a77f144b5cac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171600586s STEP: Saw pod success May 30 22:04:49.982: INFO: Pod "pod-8142b8a2-49e9-4d4c-8d23-a77f144b5cac" satisfied condition "success or failure" May 30 22:04:49.985: INFO: Trying to get logs from node jerma-worker2 pod pod-8142b8a2-49e9-4d4c-8d23-a77f144b5cac container test-container: STEP: delete the pod May 30 22:04:50.074: INFO: Waiting for pod pod-8142b8a2-49e9-4d4c-8d23-a77f144b5cac to disappear May 30 22:04:50.079: INFO: Pod pod-8142b8a2-49e9-4d4c-8d23-a77f144b5cac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:04:50.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3930" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3519,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:04:50.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:04:50.176: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:04:56.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-613" for this suite. • [SLOW TEST:6.572 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":207,"skipped":3523,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:04:56.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 22:04:56.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e5fb839-8cf0-4108-afe6-447a1268eea4" in namespace "downward-api-8778" to be "success or failure" May 30 22:04:56.797: INFO: Pod 
"downwardapi-volume-4e5fb839-8cf0-4108-afe6-447a1268eea4": Phase="Pending", Reason="", readiness=false. Elapsed: 52.431024ms May 30 22:04:58.802: INFO: Pod "downwardapi-volume-4e5fb839-8cf0-4108-afe6-447a1268eea4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057054472s May 30 22:05:00.806: INFO: Pod "downwardapi-volume-4e5fb839-8cf0-4108-afe6-447a1268eea4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061336242s STEP: Saw pod success May 30 22:05:00.806: INFO: Pod "downwardapi-volume-4e5fb839-8cf0-4108-afe6-447a1268eea4" satisfied condition "success or failure" May 30 22:05:00.810: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4e5fb839-8cf0-4108-afe6-447a1268eea4 container client-container: STEP: delete the pod May 30 22:05:00.997: INFO: Waiting for pod downwardapi-volume-4e5fb839-8cf0-4108-afe6-447a1268eea4 to disappear May 30 22:05:01.080: INFO: Pod downwardapi-volume-4e5fb839-8cf0-4108-afe6-447a1268eea4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:05:01.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8778" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3529,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:05:01.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-e8b89cca-2b3d-41f0-b6e8-a2828f2f48fd in namespace container-probe-6744 May 30 22:05:05.247: INFO: Started pod liveness-e8b89cca-2b3d-41f0-b6e8-a2828f2f48fd in namespace container-probe-6744 STEP: checking the pod's current state and verifying that restartCount is present May 30 22:05:05.254: INFO: Initial restart count of pod liveness-e8b89cca-2b3d-41f0-b6e8-a2828f2f48fd is 0 May 30 22:05:29.305: INFO: Restart count of pod container-probe-6744/liveness-e8b89cca-2b3d-41f0-b6e8-a2828f2f48fd is now 1 (24.051837445s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:05:29.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6744" for this suite. 
• [SLOW TEST:28.268 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3541,"failed":0} S ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:05:29.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:05:29.422: INFO: Creating deployment "test-recreate-deployment" May 30 22:05:29.425: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 May 30 22:05:29.430: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 30 22:05:31.437: INFO: Waiting for deployment "test-recreate-deployment" to complete May 30 22:05:31.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473129, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473129, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473129, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473129, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 22:05:33.443: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 30 22:05:33.449: INFO: Updating deployment test-recreate-deployment May 30 22:05:33.449: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 30 22:05:34.080: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1041 /apis/apps/v1/namespaces/deployment-1041/deployments/test-recreate-deployment 53e607c5-7b56-436b-88b2-ff6ae2f291c2 20442552 2 2020-05-30 22:05:29 +0000 UTC map[name:sample-pod-3]
map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cec458 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-30 22:05:33 +0000 UTC,LastTransitionTime:2020-05-30 22:05:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-30 22:05:33 +0000 UTC,LastTransitionTime:2020-05-30 22:05:29 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 30 22:05:34.084: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-1041 /apis/apps/v1/namespaces/deployment-1041/replicasets/test-recreate-deployment-5f94c574ff 0cce43f2-b55b-46ca-bc83-25fe43cfd020 20442550 1 2020-05-30 22:05:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 53e607c5-7b56-436b-88b2-ff6ae2f291c2 0xc002cb7717 0xc002cb7718}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cb7778 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 22:05:34.084: INFO: All old 
ReplicaSets of Deployment "test-recreate-deployment": May 30 22:05:34.084: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-1041 /apis/apps/v1/namespaces/deployment-1041/replicasets/test-recreate-deployment-799c574856 73712aff-7d51-451e-a2bf-15bd10404415 20442541 2 2020-05-30 22:05:29 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 53e607c5-7b56-436b-88b2-ff6ae2f291c2 0xc002cb77e7 0xc002cb77e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cb7858 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 22:05:34.088: INFO: Pod "test-recreate-deployment-5f94c574ff-qj4st" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-qj4st test-recreate-deployment-5f94c574ff- deployment-1041 /api/v1/namespaces/deployment-1041/pods/test-recreate-deployment-5f94c574ff-qj4st 99b8d5b2-2452-425d-a449-ba4348edea1f 20442553 0 2020-05-30 22:05:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 0cce43f2-b55b-46ca-bc83-25fe43cfd020 0xc002cb7ca7 0xc002cb7ca8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tlzmb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tlzmb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tlzmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:05:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:05:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:05:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:05:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 22:05:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:05:34.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1041" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":210,"skipped":3542,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:05:34.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 30 22:05:34.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3327' May 30 22:05:35.527: INFO: stderr: "" May 30 22:05:35.527: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 30 22:05:35.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3327' May 30 22:05:35.669: INFO: stderr: "" May 30 22:05:35.669: INFO: stdout: "update-demo-nautilus-g8g5f update-demo-nautilus-nxdqx " May 30 22:05:35.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8g5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3327' May 30 22:05:35.760: INFO: stderr: "" May 30 22:05:35.760: INFO: stdout: "" May 30 22:05:35.760: INFO: update-demo-nautilus-g8g5f is created but not running May 30 22:05:40.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3327' May 30 22:05:40.867: INFO: stderr: "" May 30 22:05:40.867: INFO: stdout: "update-demo-nautilus-g8g5f update-demo-nautilus-nxdqx " May 30 22:05:40.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8g5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3327' May 30 22:05:41.039: INFO: stderr: "" May 30 22:05:41.039: INFO: stdout: "true" May 30 22:05:41.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8g5f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3327' May 30 22:05:41.137: INFO: stderr: "" May 30 22:05:41.137: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 22:05:41.137: INFO: validating pod update-demo-nautilus-g8g5f May 30 22:05:41.157: INFO: got data: { "image": "nautilus.jpg" } May 30 22:05:41.157: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 30 22:05:41.157: INFO: update-demo-nautilus-g8g5f is verified up and running May 30 22:05:41.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nxdqx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3327' May 30 22:05:41.277: INFO: stderr: "" May 30 22:05:41.277: INFO: stdout: "true" May 30 22:05:41.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nxdqx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3327' May 30 22:05:41.376: INFO: stderr: "" May 30 22:05:41.376: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 22:05:41.376: INFO: validating pod update-demo-nautilus-nxdqx May 30 22:05:41.380: INFO: got data: { "image": "nautilus.jpg" } May 30 22:05:41.380: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 30 22:05:41.380: INFO: update-demo-nautilus-nxdqx is verified up and running STEP: using delete to clean up resources May 30 22:05:41.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3327' May 30 22:05:41.472: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 30 22:05:41.472: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 30 22:05:41.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3327' May 30 22:05:41.564: INFO: stderr: "No resources found in kubectl-3327 namespace.\n" May 30 22:05:41.564: INFO: stdout: "" May 30 22:05:41.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3327 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 30 22:05:41.665: INFO: stderr: "" May 30 22:05:41.665: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:05:41.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3327" for this suite. • [SLOW TEST:7.571 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":211,"skipped":3559,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:05:41.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 22:05:41.758: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f43f19ec-7fc1-4674-9a6f-7d36397f1fdb" in namespace "downward-api-8148" to be "success or failure" May 30 22:05:41.767: INFO: Pod "downwardapi-volume-f43f19ec-7fc1-4674-9a6f-7d36397f1fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.188124ms May 30 22:05:43.771: INFO: Pod "downwardapi-volume-f43f19ec-7fc1-4674-9a6f-7d36397f1fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013323677s May 30 22:05:45.774: INFO: Pod "downwardapi-volume-f43f19ec-7fc1-4674-9a6f-7d36397f1fdb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016254263s STEP: Saw pod success May 30 22:05:45.774: INFO: Pod "downwardapi-volume-f43f19ec-7fc1-4674-9a6f-7d36397f1fdb" satisfied condition "success or failure" May 30 22:05:45.776: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f43f19ec-7fc1-4674-9a6f-7d36397f1fdb container client-container: STEP: delete the pod May 30 22:05:45.807: INFO: Waiting for pod downwardapi-volume-f43f19ec-7fc1-4674-9a6f-7d36397f1fdb to disappear May 30 22:05:45.840: INFO: Pod downwardapi-volume-f43f19ec-7fc1-4674-9a6f-7d36397f1fdb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:05:45.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8148" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3573,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:05:45.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 30 22:05:45.933: INFO: Waiting up to 5m0s for pod "pod-dc56a573-b6ad-4137-9553-916bb845efbf" in namespace "emptydir-9661" to be "success or failure" May 30 22:05:45.936: INFO: Pod "pod-dc56a573-b6ad-4137-9553-916bb845efbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.573628ms May 30 22:05:47.940: INFO: Pod "pod-dc56a573-b6ad-4137-9553-916bb845efbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007165026s May 30 22:05:49.945: INFO: Pod "pod-dc56a573-b6ad-4137-9553-916bb845efbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011254805s STEP: Saw pod success May 30 22:05:49.945: INFO: Pod "pod-dc56a573-b6ad-4137-9553-916bb845efbf" satisfied condition "success or failure" May 30 22:05:49.948: INFO: Trying to get logs from node jerma-worker2 pod pod-dc56a573-b6ad-4137-9553-916bb845efbf container test-container: STEP: delete the pod May 30 22:05:49.992: INFO: Waiting for pod pod-dc56a573-b6ad-4137-9553-916bb845efbf to disappear May 30 22:05:50.016: INFO: Pod pod-dc56a573-b6ad-4137-9553-916bb845efbf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:05:50.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9661" for this suite. 
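For reference, the "emptydir 0666 on tmpfs" step above can be reproduced by hand with a memory-backed emptyDir; a minimal sketch, with illustrative names and a busybox image standing in for the suite's test container:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo            # illustrative name, not the suite's
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # non-root, as in the (non-root,0666,tmpfs) variant
  containers:
  - name: test-container
    image: busybox:1.28
    command: ["sh", "-c", "touch /ephemeral/f && chmod 0666 /ephemeral/f && ls -l /ephemeral/f"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                   # tmpfs-backed
EOF
kubectl logs emptydir-tmpfs-demo       # expect a -rw-rw-rw- entry, i.e. mode 0666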
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3576,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:05:50.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:05:54.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9831" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3577,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:05:54.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 30 22:05:54.199: INFO: Waiting up to 5m0s for pod "pod-3c2b6be0-35a7-4a29-8d0f-d9fe0afa5da5" in namespace "emptydir-1841" to be "success or failure" May 30 22:05:54.221: INFO: Pod "pod-3c2b6be0-35a7-4a29-8d0f-d9fe0afa5da5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.859134ms May 30 22:05:56.325: INFO: Pod "pod-3c2b6be0-35a7-4a29-8d0f-d9fe0afa5da5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126273169s May 30 22:05:58.330: INFO: Pod "pod-3c2b6be0-35a7-4a29-8d0f-d9fe0afa5da5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.130876075s STEP: Saw pod success May 30 22:05:58.330: INFO: Pod "pod-3c2b6be0-35a7-4a29-8d0f-d9fe0afa5da5" satisfied condition "success or failure" May 30 22:05:58.333: INFO: Trying to get logs from node jerma-worker pod pod-3c2b6be0-35a7-4a29-8d0f-d9fe0afa5da5 container test-container: STEP: delete the pod May 30 22:05:58.392: INFO: Waiting for pod pod-3c2b6be0-35a7-4a29-8d0f-d9fe0afa5da5 to disappear May 30 22:05:58.408: INFO: Pod pod-3c2b6be0-35a7-4a29-8d0f-d9fe0afa5da5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:05:58.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1841" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3583,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:05:58.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-58c80391-4d95-4100-b2c2-03f8d86fa9fe STEP: Creating a pod to test consume configMaps May 30 22:05:58.482: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea19009d-a890-47d5-9bb3-04c53ab43f11" in namespace "configmap-7840" to be "success or failure" May 30 22:05:58.517: INFO: Pod "pod-configmaps-ea19009d-a890-47d5-9bb3-04c53ab43f11": Phase="Pending", Reason="", readiness=false. Elapsed: 35.015635ms May 30 22:06:00.520: INFO: Pod "pod-configmaps-ea19009d-a890-47d5-9bb3-04c53ab43f11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038183151s May 30 22:06:02.524: INFO: Pod "pod-configmaps-ea19009d-a890-47d5-9bb3-04c53ab43f11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041934943s STEP: Saw pod success May 30 22:06:02.524: INFO: Pod "pod-configmaps-ea19009d-a890-47d5-9bb3-04c53ab43f11" satisfied condition "success or failure" May 30 22:06:02.527: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ea19009d-a890-47d5-9bb3-04c53ab43f11 container configmap-volume-test: STEP: delete the pod May 30 22:06:02.566: INFO: Waiting for pod pod-configmaps-ea19009d-a890-47d5-9bb3-04c53ab43f11 to disappear May 30 22:06:02.606: INFO: Pod pod-configmaps-ea19009d-a890-47d5-9bb3-04c53ab43f11 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:06:02.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7840" for this suite. 
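The defaultMode step above corresponds to the configMap volume's defaultMode field; a minimal hand-run sketch (the names and the 0400 mode are illustrative, the suite generates its own):
kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: demo-cm
      defaultMode: 0400                # octal; every projected file gets this mode
EOF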
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3611,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:06:02.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 30 22:06:10.737: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 22:06:10.754: INFO: Pod pod-with-poststart-http-hook still exists May 30 22:06:12.755: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 22:06:12.759: INFO: Pod pod-with-poststart-http-hook still exists May 30 22:06:14.755: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 22:06:14.759: INFO: Pod pod-with-poststart-http-hook still exists May 30 22:06:16.755: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 22:06:16.768: INFO: Pod pod-with-poststart-http-hook still exists May 30 22:06:18.755: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 22:06:18.792: INFO: Pod pod-with-poststart-http-hook still exists May 30 22:06:20.754: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 22:06:20.758: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:06:20.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3349" for this suite. 
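The poststart hook exercised above is an ordinary lifecycle.postStart.httpGet stanza; a minimal sketch (host, port, and path are illustrative — the suite points the hook at the handler pod it created first, and the address must actually be serving HTTP):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-http-demo            # illustrative name
spec:
  containers:
  - name: main
    image: busybox:1.28
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.1.50            # illustrative target address
          port: 8080
          path: /echo?msg=poststart
EOF
If the hook fails, the kubelet kills the container; the suite's "check poststart hook" step instead verifies that the handler pod received the GET.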
• [SLOW TEST:18.152 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3632,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:06:20.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-2e383490-0cd4-4699-a304-32e4a62d948a STEP: Creating a pod to test consume secrets May 30 22:06:20.827: INFO: Waiting up to 5m0s for pod "pod-secrets-8f665f1d-ff4d-4e67-9239-1d571ed4dd51" in namespace "secrets-102" to be "success or failure" May 30 22:06:20.873: INFO: Pod "pod-secrets-8f665f1d-ff4d-4e67-9239-1d571ed4dd51": Phase="Pending", Reason="", readiness=false. Elapsed: 46.318652ms May 30 22:06:22.878: INFO: Pod "pod-secrets-8f665f1d-ff4d-4e67-9239-1d571ed4dd51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051084158s May 30 22:06:24.882: INFO: Pod "pod-secrets-8f665f1d-ff4d-4e67-9239-1d571ed4dd51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054946783s STEP: Saw pod success May 30 22:06:24.882: INFO: Pod "pod-secrets-8f665f1d-ff4d-4e67-9239-1d571ed4dd51" satisfied condition "success or failure" May 30 22:06:24.884: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-8f665f1d-ff4d-4e67-9239-1d571ed4dd51 container secret-volume-test: STEP: delete the pod May 30 22:06:24.935: INFO: Waiting for pod pod-secrets-8f665f1d-ff4d-4e67-9239-1d571ed4dd51 to disappear May 30 22:06:24.938: INFO: Pod pod-secrets-8f665f1d-ff4d-4e67-9239-1d571ed4dd51 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:06:24.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-102" for this suite. 
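The "mappings" in the secret test above are the volume's items list, which surfaces a key under a chosen path; a minimal sketch with illustrative names:
kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1          # the mapping: this key appears only under this path
EOF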
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3643,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:06:24.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4871 STEP: creating a selector STEP: Creating the service pods in kubernetes May 30 22:06:25.051: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 30 22:06:55.218: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.61 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4871 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 22:06:55.218: INFO: >>> kubeConfig: /root/.kube/config I0530 22:06:55.245459 6 log.go:172] (0xc001972dc0) (0xc000f2ac80) Create stream I0530 22:06:55.245501 6 log.go:172] (0xc001972dc0) (0xc000f2ac80) Stream added, broadcasting: 1 I0530 22:06:55.247579 6 log.go:172] (0xc001972dc0) Reply frame received for 1 I0530 22:06:55.247615 6 log.go:172] (0xc001972dc0) (0xc001dfac80) Create stream I0530 22:06:55.247627 6 log.go:172] (0xc001972dc0) (0xc001dfac80) Stream added, broadcasting: 3 I0530 22:06:55.248764 6 log.go:172] (0xc001972dc0) Reply frame received for 3 I0530 22:06:55.248801 6 log.go:172] (0xc001972dc0) (0xc000f2ad20) Create stream I0530 22:06:55.248817 6 log.go:172] (0xc001972dc0) (0xc000f2ad20) Stream added, broadcasting: 5 I0530 22:06:55.250165 6 log.go:172] (0xc001972dc0) Reply frame received for 5 I0530 22:06:56.350421 6 log.go:172] (0xc001972dc0) Data frame received for 5 I0530 22:06:56.350443 6 log.go:172] (0xc000f2ad20) (5) Data frame handling I0530 22:06:56.350502 6 log.go:172] (0xc001972dc0) Data frame received for 3 I0530 22:06:56.350542 6 log.go:172] (0xc001dfac80) (3) Data frame handling I0530 22:06:56.350567 6 log.go:172] (0xc001dfac80) (3) Data frame sent I0530 22:06:56.350589 6 log.go:172] (0xc001972dc0) Data frame received for 3 I0530 22:06:56.350613 6 log.go:172] (0xc001dfac80) (3) Data frame handling I0530 22:06:56.352991 6 log.go:172] (0xc001972dc0) Data frame received for 1 I0530 22:06:56.353040 6 log.go:172] (0xc000f2ac80) (1) Data frame handling I0530 22:06:56.353059 6 log.go:172] (0xc000f2ac80) (1) Data frame sent I0530 22:06:56.353195 6 log.go:172] (0xc001972dc0) (0xc000f2ac80) Stream removed, broadcasting: 1 I0530 22:06:56.353358 6 log.go:172] (0xc001972dc0) (0xc000f2ac80) Stream removed, broadcasting: 1 I0530 22:06:56.353385 6 log.go:172] (0xc001972dc0) (0xc001dfac80) Stream removed, broadcasting: 3 I0530 22:06:56.353489 6 log.go:172] (0xc001972dc0) Go away received 
I0530 22:06:56.353597 6 log.go:172] (0xc001972dc0) (0xc000f2ad20) Stream removed, broadcasting: 5 May 30 22:06:56.353: INFO: Found all expected endpoints: [netserver-0] May 30 22:06:56.356: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.102 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4871 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 22:06:56.356: INFO: >>> kubeConfig: /root/.kube/config I0530 22:06:56.384351 6 log.go:172] (0xc002f5cb00) (0xc001dfb040) Create stream I0530 22:06:56.384390 6 log.go:172] (0xc002f5cb00) (0xc001dfb040) Stream added, broadcasting: 1 I0530 22:06:56.387649 6 log.go:172] (0xc002f5cb00) Reply frame received for 1 I0530 22:06:56.387694 6 log.go:172] (0xc002f5cb00) (0xc001100280) Create stream I0530 22:06:56.387707 6 log.go:172] (0xc002f5cb00) (0xc001100280) Stream added, broadcasting: 3 I0530 22:06:56.388724 6 log.go:172] (0xc002f5cb00) Reply frame received for 3 I0530 22:06:56.388804 6 log.go:172] (0xc002f5cb00) (0xc001dfb220) Create stream I0530 22:06:56.388837 6 log.go:172] (0xc002f5cb00) (0xc001dfb220) Stream added, broadcasting: 5 I0530 22:06:56.389840 6 log.go:172] (0xc002f5cb00) Reply frame received for 5 I0530 22:06:57.470440 6 log.go:172] (0xc002f5cb00) Data frame received for 3 I0530 22:06:57.470472 6 log.go:172] (0xc001100280) (3) Data frame handling I0530 22:06:57.470491 6 log.go:172] (0xc001100280) (3) Data frame sent I0530 22:06:57.470506 6 log.go:172] (0xc002f5cb00) Data frame received for 3 I0530 22:06:57.470516 6 log.go:172] (0xc001100280) (3) Data frame handling I0530 22:06:57.470782 6 log.go:172] (0xc002f5cb00) Data frame received for 5 I0530 22:06:57.470815 6 log.go:172] (0xc001dfb220) (5) Data frame handling I0530 22:06:57.472665 6 log.go:172] (0xc002f5cb00) Data frame received for 1 I0530 22:06:57.472714 6 log.go:172] (0xc001dfb040) (1) Data frame handling I0530 22:06:57.472778 6 log.go:172] (0xc001dfb040) (1) Data frame sent I0530 22:06:57.472811 6 log.go:172] (0xc002f5cb00) (0xc001dfb040) Stream removed, broadcasting: 1 I0530 22:06:57.472927 6 log.go:172] (0xc002f5cb00) (0xc001dfb040) Stream removed, broadcasting: 1 I0530 22:06:57.472967 6 log.go:172] (0xc002f5cb00) (0xc001100280) Stream removed, broadcasting: 3 I0530 22:06:57.473011 6 log.go:172] (0xc002f5cb00) (0xc001dfb220) Stream removed, broadcasting: 5 May 30 22:06:57.473: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:06:57.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0530 22:06:57.473482 6 log.go:172] (0xc002f5cb00) Go away received STEP: Destroying namespace "pod-network-test-4871" for this suite. 
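The UDP probe driving the checks above can be replayed by hand from the host-network test pod; the namespace, pod names, IPs, and port below are the ones from this run and will differ elsewhere:
kubectl exec -n pod-network-test-4871 host-test-container-pod -c agnhost -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.61 8081 | grep -v '^\s*\$'"
# netserver-0 echoes its hostname back over UDP; repeat against 10.244.2.102 for netserver-1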
• [SLOW TEST:32.536 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3697,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:06:57.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6919 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6919 STEP: creating replication controller externalsvc in namespace services-6919 I0530 22:06:57.739602 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6919, replica count: 2 I0530 22:07:00.790093 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 22:07:03.790319 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 30 22:07:04.167: INFO: Creating new exec pod May 30 22:07:08.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6919 execpodpw82q -- /bin/sh -x -c nslookup nodeport-service' May 30 22:07:08.775: INFO: stderr: "I0530 22:07:08.561452 3803 log.go:172] (0xc000b41340) (0xc000a08500) Create stream\nI0530 22:07:08.561510 3803 log.go:172] (0xc000b41340) (0xc000a08500) Stream added, broadcasting: 1\nI0530 22:07:08.566351 3803 log.go:172] (0xc000b41340) Reply frame received for 1\nI0530 22:07:08.566399 3803 log.go:172] (0xc000b41340) (0xc000a08000) Create stream\nI0530 22:07:08.566412 3803 log.go:172] (0xc000b41340) (0xc000a08000) Stream added, broadcasting: 3\nI0530 22:07:08.567487 3803 log.go:172] (0xc000b41340) Reply frame received for 3\nI0530 22:07:08.567517 3803 log.go:172] (0xc000b41340) (0xc0009ea000) Create stream\nI0530 22:07:08.567528 3803 log.go:172] (0xc000b41340) (0xc0009ea000) Stream added, broadcasting: 5\nI0530 22:07:08.568349 3803 log.go:172] (0xc000b41340) Reply frame received for 5\nI0530 
22:07:08.675579 3803 log.go:172] (0xc000b41340) Data frame received for 5\nI0530 22:07:08.675607 3803 log.go:172] (0xc0009ea000) (5) Data frame handling\nI0530 22:07:08.675626 3803 log.go:172] (0xc0009ea000) (5) Data frame sent\n+ nslookup nodeport-service\nI0530 22:07:08.763918 3803 log.go:172] (0xc000b41340) Data frame received for 3\nI0530 22:07:08.763952 3803 log.go:172] (0xc000a08000) (3) Data frame handling\nI0530 22:07:08.763974 3803 log.go:172] (0xc000a08000) (3) Data frame sent\nI0530 22:07:08.765541 3803 log.go:172] (0xc000b41340) Data frame received for 3\nI0530 22:07:08.765570 3803 log.go:172] (0xc000a08000) (3) Data frame handling\nI0530 22:07:08.765597 3803 log.go:172] (0xc000a08000) (3) Data frame sent\nI0530 22:07:08.766498 3803 log.go:172] (0xc000b41340) Data frame received for 5\nI0530 22:07:08.766525 3803 log.go:172] (0xc0009ea000) (5) Data frame handling\nI0530 22:07:08.766582 3803 log.go:172] (0xc000b41340) Data frame received for 3\nI0530 22:07:08.766611 3803 log.go:172] (0xc000a08000) (3) Data frame handling\nI0530 22:07:08.768495 3803 log.go:172] (0xc000b41340) Data frame received for 1\nI0530 22:07:08.768514 3803 log.go:172] (0xc000a08500) (1) Data frame handling\nI0530 22:07:08.768531 3803 log.go:172] (0xc000a08500) (1) Data frame sent\nI0530 22:07:08.768546 3803 log.go:172] (0xc000b41340) (0xc000a08500) Stream removed, broadcasting: 1\nI0530 22:07:08.768569 3803 log.go:172] (0xc000b41340) Go away received\nI0530 22:07:08.768974 3803 log.go:172] (0xc000b41340) (0xc000a08500) Stream removed, broadcasting: 1\nI0530 22:07:08.768998 3803 log.go:172] (0xc000b41340) (0xc000a08000) Stream removed, broadcasting: 3\nI0530 22:07:08.769009 3803 log.go:172] (0xc000b41340) (0xc0009ea000) Stream removed, broadcasting: 5\n" May 30 22:07:08.776: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6919.svc.cluster.local\tcanonical name = externalsvc.services-6919.svc.cluster.local.\nName:\texternalsvc.services-6919.svc.cluster.local\nAddress: 10.107.76.47\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6919, will wait for the garbage collector to delete the pods May 30 22:07:08.855: INFO: Deleting ReplicationController externalsvc took: 6.402353ms May 30 22:07:09.256: INFO: Terminating ReplicationController externalsvc pods took: 400.228114ms May 30 22:07:13.991: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:07:14.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6919" for this suite. 
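The type flip above amounts to rewriting the Service spec in place; a sketch of an equivalent merge patch (the suite updates the object directly, and depending on the API version clusterIP and ports must be cleared as well, which the nulls below do):
kubectl patch service nodeport-service -n services-6919 --type=merge \
  -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-6919.svc.cluster.local","clusterIP":null,"ports":null}}'
# afterwards, nslookup nodeport-service from an exec pod returns the CNAME shown in the output above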
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:16.568 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":220,"skipped":3701,"failed":0} [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:07:14.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 30 22:07:14.118: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1426 /api/v1/namespaces/watch-1426/configmaps/e2e-watch-test-watch-closed 337f724a-b186-46d3-94c1-81d53d62814b 20443261 0 2020-05-30 22:07:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 30 22:07:14.118: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1426 /api/v1/namespaces/watch-1426/configmaps/e2e-watch-test-watch-closed 337f724a-b186-46d3-94c1-81d53d62814b 20443262 0 2020-05-30 22:07:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 30 22:07:14.148: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1426 /api/v1/namespaces/watch-1426/configmaps/e2e-watch-test-watch-closed 337f724a-b186-46d3-94c1-81d53d62814b 20443263 0 2020-05-30 22:07:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 30 22:07:14.148: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1426 /api/v1/namespaces/watch-1426/configmaps/e2e-watch-test-watch-closed 337f724a-b186-46d3-94c1-81d53d62814b 20443264 0 2020-05-30 22:07:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:07:14.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1426" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":221,"skipped":3701,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:07:14.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-bf08f8a1-d768-42f0-b858-cd47b4d8dc1b in namespace container-probe-169 May 30 22:07:18.294: INFO: Started pod busybox-bf08f8a1-d768-42f0-b858-cd47b4d8dc1b in namespace container-probe-169 STEP: checking the pod's current state and verifying that restartCount is present May 30 22:07:18.298: INFO: Initial restart count of pod busybox-bf08f8a1-d768-42f0-b858-cd47b4d8dc1b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:11:18.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-169" for this suite. 
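The probe under test is a plain exec liveness check; a minimal sketch of a pod whose /tmp/health stays present, so restartCount should remain 0 just as the four-minute observation above confirms (name, image, and timings illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: exec-liveness-demo             # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod exec-liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'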
• [SLOW TEST:244.796 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3710,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:11:18.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9114.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9114.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9114.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9114.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9114.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9114.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 22:11:27.164: INFO: DNS probes using dns-9114/dns-test-841271a0-c514-4e52-979b-7c66e21be063 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:11:27.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9114" for this suite. 
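The wheezy/jessie probes above reduce to getent lookups against the pod's kubelet-managed /etc/hosts; the same check can be run one-off (pod name and image illustrative):
kubectl run dns-hosts-check --image=busybox:1.28 --restart=Never --rm -it -- \
  sh -c 'cat /etc/hosts && getent hosts "$(hostname)"'
# the pod's own hostname must resolve via /etc/hosts for the probe loops above to write OK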
• [SLOW TEST:8.302 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":223,"skipped":3718,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:11:27.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 30 22:11:27.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 30 22:11:27.822: INFO: stderr: "" May 30 22:11:27.822: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:11:27.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3198" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":224,"skipped":3730,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:11:27.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 30 22:11:27.866: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 30 22:11:27.884: INFO: Waiting for terminating namespaces to be deleted... 
May 30 22:11:27.886: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 30 22:11:27.904: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 30 22:11:27.904: INFO: Container kube-proxy ready: true, restart count 0 May 30 22:11:27.904: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 30 22:11:27.904: INFO: Container kindnet-cni ready: true, restart count 2 May 30 22:11:27.905: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 30 22:11:27.926: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 30 22:11:27.926: INFO: Container kube-hunter ready: false, restart count 0 May 30 22:11:27.926: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 30 22:11:27.926: INFO: Container kindnet-cni ready: true, restart count 2 May 30 22:11:27.926: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 30 22:11:27.926: INFO: Container kube-bench ready: false, restart count 0 May 30 22:11:27.926: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 30 22:11:27.926: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d73d6d83-9f72-4ccc-8284-354911f87f09 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-d73d6d83-9f72-4ccc-8284-354911f87f09 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-d73d6d83-9f72-4ccc-8284-354911f87f09 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:11:46.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5240" for this suite. 
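What the predicate test pins down is that hostPort conflicts are keyed on the full (hostIP, hostPort, protocol) triple, not on hostPort alone; a sketch of two pods that coexist on one node under that rule (pod names, image, and the node pin are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod1                  # illustrative
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker2   # pin both pods to one node, as the suite's random label does
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod2                  # illustrative
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker2
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54321                  # same port, different hostIP: no conflict
      hostIP: 127.0.0.2
      protocol: TCP                    # switching this to UDP is the pod3 case, also no conflict
EOF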
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:18.352 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":225,"skipped":3738,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:11:46.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 22:11:46.920: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 22:11:49.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473506, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473506, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473507, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473506, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 22:11:51.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473506, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473506, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473507, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473506, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 22:11:54.271: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 30 22:11:54.286: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:11:54.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5720" for this suite. STEP: Destroying namespace "webhook-5720-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.267 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":226,"skipped":3749,"failed":0} S ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:11:54.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 30 22:11:58.661: INFO: &Pod{ObjectMeta:{send-events-c652a5ac-5288-434c-8529-0c7c12c9c243 events-64 /api/v1/namespaces/events-64/pods/send-events-c652a5ac-5288-434c-8529-0c7c12c9c243 3a7fa927-0aab-42d1-8657-dbb6aaeab709 20444315 0 2020-05-30 22:11:54 +0000 UTC map[name:foo time:556139937] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-krpt7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-krpt7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-krpt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:11:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:11:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:11:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:11:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.65,StartTime:2020-05-30 22:11:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 22:11:57 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://6e10c86eb8d394b3b663f2edd3c90a6f43bc7ed7895632f9bdfdefabf0868f45,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 30 22:12:00.666: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 30 22:12:02.671: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:12:02.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-64" for this suite. • [SLOW TEST:8.237 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":227,"skipped":3750,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:12:02.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6184 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-6184 I0530 22:12:02.922322 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6184, replica count: 2 I0530 22:12:05.972786 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 22:12:08.973056 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 22:12:08.973: INFO: Creating new exec pod May 30 22:12:14.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6184 execpodxl25s -- /bin/sh -x -c nc -zv -t -w 2 
externalname-service 80' May 30 22:12:14.261: INFO: stderr: "I0530 22:12:14.160645 3847 log.go:172] (0xc0000f5130) (0xc000665ea0) Create stream\nI0530 22:12:14.160724 3847 log.go:172] (0xc0000f5130) (0xc000665ea0) Stream added, broadcasting: 1\nI0530 22:12:14.163494 3847 log.go:172] (0xc0000f5130) Reply frame received for 1\nI0530 22:12:14.163545 3847 log.go:172] (0xc0000f5130) (0xc000574780) Create stream\nI0530 22:12:14.163558 3847 log.go:172] (0xc0000f5130) (0xc000574780) Stream added, broadcasting: 3\nI0530 22:12:14.164403 3847 log.go:172] (0xc0000f5130) Reply frame received for 3\nI0530 22:12:14.164429 3847 log.go:172] (0xc0000f5130) (0xc0007b7540) Create stream\nI0530 22:12:14.164440 3847 log.go:172] (0xc0000f5130) (0xc0007b7540) Stream added, broadcasting: 5\nI0530 22:12:14.165599 3847 log.go:172] (0xc0000f5130) Reply frame received for 5\nI0530 22:12:14.237263 3847 log.go:172] (0xc0000f5130) Data frame received for 5\nI0530 22:12:14.237313 3847 log.go:172] (0xc0007b7540) (5) Data frame handling\nI0530 22:12:14.237350 3847 log.go:172] (0xc0007b7540) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0530 22:12:14.252364 3847 log.go:172] (0xc0000f5130) Data frame received for 5\nI0530 22:12:14.252391 3847 log.go:172] (0xc0007b7540) (5) Data frame handling\nI0530 22:12:14.252408 3847 log.go:172] (0xc0007b7540) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0530 22:12:14.252850 3847 log.go:172] (0xc0000f5130) Data frame received for 5\nI0530 22:12:14.252883 3847 log.go:172] (0xc0007b7540) (5) Data frame handling\nI0530 22:12:14.253014 3847 log.go:172] (0xc0000f5130) Data frame received for 3\nI0530 22:12:14.253030 3847 log.go:172] (0xc000574780) (3) Data frame handling\nI0530 22:12:14.255100 3847 log.go:172] (0xc0000f5130) Data frame received for 1\nI0530 22:12:14.255113 3847 log.go:172] (0xc000665ea0) (1) Data frame handling\nI0530 22:12:14.255120 3847 log.go:172] (0xc000665ea0) (1) Data frame sent\nI0530 22:12:14.255134 3847 log.go:172] (0xc0000f5130) (0xc000665ea0) Stream removed, broadcasting: 1\nI0530 22:12:14.255149 3847 log.go:172] (0xc0000f5130) Go away received\nI0530 22:12:14.255503 3847 log.go:172] (0xc0000f5130) (0xc000665ea0) Stream removed, broadcasting: 1\nI0530 22:12:14.255528 3847 log.go:172] (0xc0000f5130) (0xc000574780) Stream removed, broadcasting: 3\nI0530 22:12:14.255541 3847 log.go:172] (0xc0000f5130) (0xc0007b7540) Stream removed, broadcasting: 5\n" May 30 22:12:14.261: INFO: stdout: "" May 30 22:12:14.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6184 execpodxl25s -- /bin/sh -x -c nc -zv -t -w 2 10.111.13.150 80' May 30 22:12:14.490: INFO: stderr: "I0530 22:12:14.414686 3867 log.go:172] (0xc000ad0c60) (0xc0008f8460) Create stream\nI0530 22:12:14.414747 3867 log.go:172] (0xc000ad0c60) (0xc0008f8460) Stream added, broadcasting: 1\nI0530 22:12:14.420201 3867 log.go:172] (0xc000ad0c60) Reply frame received for 1\nI0530 22:12:14.420249 3867 log.go:172] (0xc000ad0c60) (0xc00081bcc0) Create stream\nI0530 22:12:14.420260 3867 log.go:172] (0xc000ad0c60) (0xc00081bcc0) Stream added, broadcasting: 3\nI0530 22:12:14.421517 3867 log.go:172] (0xc000ad0c60) Reply frame received for 3\nI0530 22:12:14.421566 3867 log.go:172] (0xc000ad0c60) (0xc0006f28c0) Create stream\nI0530 22:12:14.421582 3867 log.go:172] (0xc000ad0c60) (0xc0006f28c0) Stream added, broadcasting: 5\nI0530 22:12:14.422455 3867 log.go:172] (0xc000ad0c60) Reply frame received for 5\nI0530 22:12:14.483311 
3867 log.go:172] (0xc000ad0c60) Data frame received for 5\nI0530 22:12:14.483370 3867 log.go:172] (0xc0006f28c0) (5) Data frame handling\nI0530 22:12:14.483398 3867 log.go:172] (0xc0006f28c0) (5) Data frame sent\nI0530 22:12:14.483417 3867 log.go:172] (0xc000ad0c60) Data frame received for 5\nI0530 22:12:14.483433 3867 log.go:172] (0xc0006f28c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.13.150 80\nConnection to 10.111.13.150 80 port [tcp/http] succeeded!\nI0530 22:12:14.483471 3867 log.go:172] (0xc000ad0c60) Data frame received for 3\nI0530 22:12:14.483515 3867 log.go:172] (0xc00081bcc0) (3) Data frame handling\nI0530 22:12:14.485332 3867 log.go:172] (0xc000ad0c60) Data frame received for 1\nI0530 22:12:14.485368 3867 log.go:172] (0xc0008f8460) (1) Data frame handling\nI0530 22:12:14.485390 3867 log.go:172] (0xc0008f8460) (1) Data frame sent\nI0530 22:12:14.485409 3867 log.go:172] (0xc000ad0c60) (0xc0008f8460) Stream removed, broadcasting: 1\nI0530 22:12:14.485441 3867 log.go:172] (0xc000ad0c60) Go away received\nI0530 22:12:14.485860 3867 log.go:172] (0xc000ad0c60) (0xc0008f8460) Stream removed, broadcasting: 1\nI0530 22:12:14.485882 3867 log.go:172] (0xc000ad0c60) (0xc00081bcc0) Stream removed, broadcasting: 3\nI0530 22:12:14.485894 3867 log.go:172] (0xc000ad0c60) (0xc0006f28c0) Stream removed, broadcasting: 5\n" May 30 22:12:14.490: INFO: stdout: "" May 30 22:12:14.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6184 execpodxl25s -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30321' May 30 22:12:14.715: INFO: stderr: "I0530 22:12:14.615225 3887 log.go:172] (0xc0000f4e70) (0xc0007a0000) Create stream\nI0530 22:12:14.615280 3887 log.go:172] (0xc0000f4e70) (0xc0007a0000) Stream added, broadcasting: 1\nI0530 22:12:14.618696 3887 log.go:172] (0xc0000f4e70) Reply frame received for 1\nI0530 22:12:14.618756 3887 log.go:172] (0xc0000f4e70) (0xc0006cdb80) Create stream\nI0530 22:12:14.618791 3887 log.go:172] (0xc0000f4e70) (0xc0006cdb80) Stream added, broadcasting: 3\nI0530 22:12:14.619873 3887 log.go:172] (0xc0000f4e70) Reply frame received for 3\nI0530 22:12:14.619937 3887 log.go:172] (0xc0000f4e70) (0xc0006cdd60) Create stream\nI0530 22:12:14.619960 3887 log.go:172] (0xc0000f4e70) (0xc0006cdd60) Stream added, broadcasting: 5\nI0530 22:12:14.621013 3887 log.go:172] (0xc0000f4e70) Reply frame received for 5\nI0530 22:12:14.703972 3887 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0530 22:12:14.704012 3887 log.go:172] (0xc0006cdd60) (5) Data frame handling\nI0530 22:12:14.704042 3887 log.go:172] (0xc0006cdd60) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30321\nI0530 22:12:14.706605 3887 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0530 22:12:14.706635 3887 log.go:172] (0xc0006cdd60) (5) Data frame handling\nI0530 22:12:14.706660 3887 log.go:172] (0xc0006cdd60) (5) Data frame sent\nConnection to 172.17.0.10 30321 port [tcp/30321] succeeded!\nI0530 22:12:14.706759 3887 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0530 22:12:14.706772 3887 log.go:172] (0xc0006cdb80) (3) Data frame handling\nI0530 22:12:14.707028 3887 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0530 22:12:14.707051 3887 log.go:172] (0xc0006cdd60) (5) Data frame handling\nI0530 22:12:14.708800 3887 log.go:172] (0xc0000f4e70) Data frame received for 1\nI0530 22:12:14.708878 3887 log.go:172] (0xc0007a0000) (1) Data frame handling\nI0530 22:12:14.708905 3887 log.go:172] (0xc0007a0000) (1) Data frame sent\nI0530 22:12:14.708922 3887 
log.go:172] (0xc0000f4e70) (0xc0007a0000) Stream removed, broadcasting: 1\nI0530 22:12:14.708952 3887 log.go:172] (0xc0000f4e70) Go away received\nI0530 22:12:14.709562 3887 log.go:172] (0xc0000f4e70) (0xc0007a0000) Stream removed, broadcasting: 1\nI0530 22:12:14.709599 3887 log.go:172] (0xc0000f4e70) (0xc0006cdb80) Stream removed, broadcasting: 3\nI0530 22:12:14.709624 3887 log.go:172] (0xc0000f4e70) (0xc0006cdd60) Stream removed, broadcasting: 5\n" May 30 22:12:14.715: INFO: stdout: "" May 30 22:12:14.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6184 execpodxl25s -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30321' May 30 22:12:14.921: INFO: stderr: "I0530 22:12:14.844041 3910 log.go:172] (0xc0007b69a0) (0xc0004cdc20) Create stream\nI0530 22:12:14.844101 3910 log.go:172] (0xc0007b69a0) (0xc0004cdc20) Stream added, broadcasting: 1\nI0530 22:12:14.847372 3910 log.go:172] (0xc0007b69a0) Reply frame received for 1\nI0530 22:12:14.847423 3910 log.go:172] (0xc0007b69a0) (0xc00085a0a0) Create stream\nI0530 22:12:14.847437 3910 log.go:172] (0xc0007b69a0) (0xc00085a0a0) Stream added, broadcasting: 3\nI0530 22:12:14.848659 3910 log.go:172] (0xc0007b69a0) Reply frame received for 3\nI0530 22:12:14.848706 3910 log.go:172] (0xc0007b69a0) (0xc0004cdcc0) Create stream\nI0530 22:12:14.848722 3910 log.go:172] (0xc0007b69a0) (0xc0004cdcc0) Stream added, broadcasting: 5\nI0530 22:12:14.849892 3910 log.go:172] (0xc0007b69a0) Reply frame received for 5\nI0530 22:12:14.912971 3910 log.go:172] (0xc0007b69a0) Data frame received for 3\nI0530 22:12:14.913021 3910 log.go:172] (0xc00085a0a0) (3) Data frame handling\nI0530 22:12:14.913053 3910 log.go:172] (0xc0007b69a0) Data frame received for 5\nI0530 22:12:14.913078 3910 log.go:172] (0xc0004cdcc0) (5) Data frame handling\nI0530 22:12:14.913104 3910 log.go:172] (0xc0004cdcc0) (5) Data frame sent\nI0530 22:12:14.913315 3910 log.go:172] (0xc0007b69a0) Data frame received for 5\nI0530 22:12:14.913335 3910 log.go:172] (0xc0004cdcc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30321\nConnection to 172.17.0.8 30321 port [tcp/30321] succeeded!\nI0530 22:12:14.914686 3910 log.go:172] (0xc0007b69a0) Data frame received for 1\nI0530 22:12:14.914712 3910 log.go:172] (0xc0004cdc20) (1) Data frame handling\nI0530 22:12:14.914737 3910 log.go:172] (0xc0004cdc20) (1) Data frame sent\nI0530 22:12:14.914766 3910 log.go:172] (0xc0007b69a0) (0xc0004cdc20) Stream removed, broadcasting: 1\nI0530 22:12:14.914787 3910 log.go:172] (0xc0007b69a0) Go away received\nI0530 22:12:14.915192 3910 log.go:172] (0xc0007b69a0) (0xc0004cdc20) Stream removed, broadcasting: 1\nI0530 22:12:14.915225 3910 log.go:172] (0xc0007b69a0) (0xc00085a0a0) Stream removed, broadcasting: 3\nI0530 22:12:14.915237 3910 log.go:172] (0xc0007b69a0) (0xc0004cdcc0) Stream removed, broadcasting: 5\n" May 30 22:12:14.921: INFO: stdout: "" May 30 22:12:14.921: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:12:15.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6184" for this suite. 
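The type flip above is done through the API by the test; an approximate kubectl equivalent is sketched below. The patch form and the external name foo.example.com are assumptions, and the selector that ties the service to the test's externalname-service replication controller is omitted here; the final nc check is the same one the harness runs from its exec pod.

kubectl create service externalname externalname-service \
    --external-name=foo.example.com -n services-6184
kubectl patch service externalname-service -n services-6184 --type=merge \
    -p '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80,"targetPort":80,"protocol":"TCP"}]}}'
kubectl exec -n services-6184 execpodxl25s -- \
    /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'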
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.345 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":228,"skipped":3765,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:12:15.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 22:12:15.755: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 22:12:17.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473535, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473535, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473535, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473535, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 22:12:20.827: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 30 22:12:25.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4053 to-be-attached-pod -i -c=container1' May 30 22:12:25.129: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:12:25.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4053" for this suite. STEP: Destroying namespace "webhook-4053-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.242 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":229,"skipped":3767,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:12:25.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3528 STEP: creating a selector STEP: Creating the service pods in kubernetes May 30 22:12:25.545: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 30 22:12:47.842: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.69:8080/dial?request=hostname&protocol=http&host=10.244.1.68&port=8080&tries=1'] Namespace:pod-network-test-3528 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 22:12:47.842: INFO: >>> kubeConfig: /root/.kube/config I0530 22:12:47.871467 6 log.go:172] (0xc002f5d8c0) (0xc001dfa960) Create stream I0530 22:12:47.871491 6 log.go:172] (0xc002f5d8c0) (0xc001dfa960) Stream added, broadcasting: 1 I0530 22:12:47.873547 6 log.go:172] (0xc002f5d8c0) Reply frame received for 1 I0530 22:12:47.873588 6 log.go:172] (0xc002f5d8c0) (0xc001100140) Create stream I0530 22:12:47.873602 6 log.go:172] (0xc002f5d8c0) (0xc001100140) Stream added, broadcasting: 3 I0530 22:12:47.874721 6 log.go:172] (0xc002f5d8c0) Reply frame received for 3 I0530 22:12:47.874788 6 log.go:172] (0xc002f5d8c0) (0xc00173f7c0) Create stream I0530 22:12:47.874805 6 log.go:172] (0xc002f5d8c0) (0xc00173f7c0) Stream added, broadcasting: 5 I0530 22:12:47.875835 6 log.go:172] (0xc002f5d8c0) Reply frame received for 5 I0530 22:12:47.989967 6 log.go:172] (0xc002f5d8c0) Data frame received for 3 I0530 22:12:47.990008 6 log.go:172] (0xc001100140) (3) Data frame handling I0530 22:12:47.990029 6 log.go:172] (0xc001100140) (3) Data frame sent I0530 22:12:47.990912 6 log.go:172] 
(0xc002f5d8c0) Data frame received for 5 I0530 22:12:47.990944 6 log.go:172] (0xc00173f7c0) (5) Data frame handling I0530 22:12:47.990989 6 log.go:172] (0xc002f5d8c0) Data frame received for 3 I0530 22:12:47.991020 6 log.go:172] (0xc001100140) (3) Data frame handling I0530 22:12:47.993020 6 log.go:172] (0xc002f5d8c0) Data frame received for 1 I0530 22:12:47.993057 6 log.go:172] (0xc001dfa960) (1) Data frame handling I0530 22:12:47.993091 6 log.go:172] (0xc001dfa960) (1) Data frame sent I0530 22:12:47.993265 6 log.go:172] (0xc002f5d8c0) (0xc001dfa960) Stream removed, broadcasting: 1 I0530 22:12:47.993406 6 log.go:172] (0xc002f5d8c0) (0xc001dfa960) Stream removed, broadcasting: 1 I0530 22:12:47.993418 6 log.go:172] (0xc002f5d8c0) (0xc001100140) Stream removed, broadcasting: 3 I0530 22:12:47.993521 6 log.go:172] (0xc002f5d8c0) (0xc00173f7c0) Stream removed, broadcasting: 5 May 30 22:12:47.993: INFO: Waiting for responses: map[] I0530 22:12:47.993611 6 log.go:172] (0xc002f5d8c0) Go away received May 30 22:12:47.996: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.69:8080/dial?request=hostname&protocol=http&host=10.244.2.114&port=8080&tries=1'] Namespace:pod-network-test-3528 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 22:12:47.996: INFO: >>> kubeConfig: /root/.kube/config I0530 22:12:48.031504 6 log.go:172] (0xc00293c6e0) (0xc002350280) Create stream I0530 22:12:48.031533 6 log.go:172] (0xc00293c6e0) (0xc002350280) Stream added, broadcasting: 1 I0530 22:12:48.033424 6 log.go:172] (0xc00293c6e0) Reply frame received for 1 I0530 22:12:48.033474 6 log.go:172] (0xc00293c6e0) (0xc000f2b2c0) Create stream I0530 22:12:48.033488 6 log.go:172] (0xc00293c6e0) (0xc000f2b2c0) Stream added, broadcasting: 3 I0530 22:12:48.034506 6 log.go:172] (0xc00293c6e0) Reply frame received for 3 I0530 22:12:48.034545 6 log.go:172] (0xc00293c6e0) (0xc000f2b680) Create stream I0530 22:12:48.034567 6 log.go:172] (0xc00293c6e0) (0xc000f2b680) Stream added, broadcasting: 5 I0530 22:12:48.035483 6 log.go:172] (0xc00293c6e0) Reply frame received for 5 I0530 22:12:48.110979 6 log.go:172] (0xc00293c6e0) Data frame received for 3 I0530 22:12:48.111004 6 log.go:172] (0xc000f2b2c0) (3) Data frame handling I0530 22:12:48.111019 6 log.go:172] (0xc000f2b2c0) (3) Data frame sent I0530 22:12:48.111637 6 log.go:172] (0xc00293c6e0) Data frame received for 3 I0530 22:12:48.111658 6 log.go:172] (0xc000f2b2c0) (3) Data frame handling I0530 22:12:48.111854 6 log.go:172] (0xc00293c6e0) Data frame received for 5 I0530 22:12:48.111873 6 log.go:172] (0xc000f2b680) (5) Data frame handling I0530 22:12:48.113603 6 log.go:172] (0xc00293c6e0) Data frame received for 1 I0530 22:12:48.113638 6 log.go:172] (0xc002350280) (1) Data frame handling I0530 22:12:48.113681 6 log.go:172] (0xc002350280) (1) Data frame sent I0530 22:12:48.113713 6 log.go:172] (0xc00293c6e0) (0xc002350280) Stream removed, broadcasting: 1 I0530 22:12:48.113742 6 log.go:172] (0xc00293c6e0) Go away received I0530 22:12:48.113816 6 log.go:172] (0xc00293c6e0) (0xc002350280) Stream removed, broadcasting: 1 I0530 22:12:48.113842 6 log.go:172] (0xc00293c6e0) (0xc000f2b2c0) Stream removed, broadcasting: 3 I0530 22:12:48.113875 6 log.go:172] (0xc00293c6e0) (0xc000f2b680) Stream removed, broadcasting: 5 May 30 22:12:48.113: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:12:48.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3528" for this suite. • [SLOW TEST:22.846 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3769,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:12:48.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-4883dd70-40a8-4131-9a34-f0f3a6af8305 STEP: Creating secret with name s-test-opt-upd-a99e7237-0fbb-42b9-a863-4df62b48a932 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-4883dd70-40a8-4131-9a34-f0f3a6af8305 STEP: Updating secret s-test-opt-upd-a99e7237-0fbb-42b9-a863-4df62b48a932 STEP: Creating secret with name s-test-opt-create-ceae9621-b4b0-48d1-98af-e10c80d8ca6a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:12:56.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1358" for this suite. 
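What the "waiting to observe update in volume" step polls for can be reproduced with a pod like the following sketch (the pod and secret names are illustrative assumptions): a projected volume that references an optional secret tolerates the secret being absent or deleted, and picks up creates and updates without a pod restart, after the kubelet's next sync.

kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets    # hypothetical name
spec:
  containers:
  - name: reader
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-upd
          optional: true         # a missing or deleted secret does not block the pod
EOF
# Update the secret, then watch the mounted file converge:
kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-2 \
    --dry-run -o yaml | kubectl apply -f -
kubectl exec pod-projected-secrets -- cat /etc/projected-secret/data-1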
• [SLOW TEST:8.445 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3779,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:12:56.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7763.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7763.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 22:13:04.742: INFO: DNS probes using dns-7763/dns-test-edebbf48-5975-4c81-bb7f-aaa5156e002e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:13:04.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7763" for this suite. 
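Outside the harness, the probe loop above reduces to running dig from any in-cluster pod; a one-off version is sketched below. The jessie-dnsutils image is an assumption (any image with dig works), and the queries are the UDP and TCP lookups from the loop.

kubectl run dns-probe --restart=Never --rm -it \
    --image=gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0 -- \
    sh -c 'dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A &&
           dig +tcp   +noall +answer +search kubernetes.default.svc.cluster.local A'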
• [SLOW TEST:8.285 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":232,"skipped":3790,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:13:04.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 30 22:13:05.251: INFO: Waiting up to 5m0s for pod "downwardapi-volume-579ac96f-35e6-4e20-9ffc-33307f11a10c" in namespace "projected-5338" to be "success or failure" May 30 22:13:05.259: INFO: Pod "downwardapi-volume-579ac96f-35e6-4e20-9ffc-33307f11a10c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083733ms May 30 22:13:07.314: INFO: Pod "downwardapi-volume-579ac96f-35e6-4e20-9ffc-33307f11a10c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063213572s May 30 22:13:09.318: INFO: Pod "downwardapi-volume-579ac96f-35e6-4e20-9ffc-33307f11a10c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06705864s STEP: Saw pod success May 30 22:13:09.318: INFO: Pod "downwardapi-volume-579ac96f-35e6-4e20-9ffc-33307f11a10c" satisfied condition "success or failure" May 30 22:13:09.349: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-579ac96f-35e6-4e20-9ffc-33307f11a10c container client-container: STEP: delete the pod May 30 22:13:09.410: INFO: Waiting for pod downwardapi-volume-579ac96f-35e6-4e20-9ffc-33307f11a10c to disappear May 30 22:13:09.425: INFO: Pod downwardapi-volume-579ac96f-35e6-4e20-9ffc-33307f11a10c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:13:09.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5338" for this suite. 
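The "downward API volume plugin" being tested can be sketched as below, with a hypothetical pod name: metadata.name is projected into a file, the container prints that file and exits, so the pod reaches Succeeded and the log shows the pod's own name. The agnhost mounttest invocation mirrors how this suite reads mounted files elsewhere.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["mounttest", "--file_content=/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs downwardapi-podname-demo   # expected to print the pod's own name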
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3801,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:13:09.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 30 22:13:14.087: INFO: Successfully updated pod "labelsupdate6283ebe6-a67f-4de3-934a-c757afd1f332" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:13:18.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3656" for this suite. • [SLOW TEST:8.703 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3809,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:13:18.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 30 22:13:18.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5355' May 30 22:13:25.108: INFO: stderr: "" May 30 22:13:25.108: INFO: stdout: "pod/pause created\n" May 30 22:13:25.108: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 30 22:13:25.108: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5355" to be "running and ready" May 30 22:13:25.116: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.338766ms May 30 22:13:27.120: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011684756s May 30 22:13:29.124: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.016148957s May 30 22:13:29.124: INFO: Pod "pause" satisfied condition "running and ready" May 30 22:13:29.124: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 30 22:13:29.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5355' May 30 22:13:29.247: INFO: stderr: "" May 30 22:13:29.247: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 30 22:13:29.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5355' May 30 22:13:29.338: INFO: stderr: "" May 30 22:13:29.338: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 30 22:13:29.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5355' May 30 22:13:29.424: INFO: stderr: "" May 30 22:13:29.424: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 30 22:13:29.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5355' May 30 22:13:29.518: INFO: stderr: "" May 30 22:13:29.518: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 30 22:13:29.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5355' May 30 22:13:29.649: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 30 22:13:29.649: INFO: stdout: "pod \"pause\" force deleted\n" May 30 22:13:29.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5355' May 30 22:13:29.839: INFO: stderr: "No resources found in kubectl-5355 namespace.\n" May 30 22:13:29.839: INFO: stdout: "" May 30 22:13:29.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5355 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 30 22:13:29.987: INFO: stderr: "" May 30 22:13:29.987: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:13:29.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5355" for this suite. 
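Stripped of the harness, the label round-trip above is just the following four commands (pod and namespace names taken from the log; a key with a trailing hyphen removes that label):

kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-5355
kubectl get pod pause -L testing-label --namespace=kubectl-5355    # TESTING-LABEL column shows the value
kubectl label pods pause testing-label- --namespace=kubectl-5355   # trailing '-' deletes the label
kubectl get pod pause -L testing-label --namespace=kubectl-5355    # column is now empty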
• [SLOW TEST:11.871 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":235,"skipped":3867,"failed":0} [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:13:30.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 30 22:13:30.219: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:30.248: INFO: Number of nodes with available pods: 0 May 30 22:13:30.248: INFO: Node jerma-worker is running more than one daemon pod May 30 22:13:31.253: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:31.256: INFO: Number of nodes with available pods: 0 May 30 22:13:31.256: INFO: Node jerma-worker is running more than one daemon pod May 30 22:13:32.422: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:32.770: INFO: Number of nodes with available pods: 0 May 30 22:13:32.770: INFO: Node jerma-worker is running more than one daemon pod May 30 22:13:33.252: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:33.254: INFO: Number of nodes with available pods: 0 May 30 22:13:33.254: INFO: Node jerma-worker is running more than one daemon pod May 30 22:13:34.268: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:34.284: INFO: Number of nodes with available pods: 0 May 30 22:13:34.284: INFO: Node jerma-worker is running more than one daemon pod May 30 22:13:35.251: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:35.259: INFO: Number of nodes with available pods: 2 May 30 22:13:35.259: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 30 22:13:35.309: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:35.313: INFO: Number of nodes with available pods: 1 May 30 22:13:35.313: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:36.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:36.359: INFO: Number of nodes with available pods: 1 May 30 22:13:36.359: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:37.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:37.321: INFO: Number of nodes with available pods: 1 May 30 22:13:37.321: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:38.317: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:38.322: INFO: Number of nodes with available pods: 1 May 30 22:13:38.322: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:39.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:39.321: INFO: Number of nodes with available pods: 1 May 30 22:13:39.321: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:40.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:40.321: INFO: Number of nodes with available pods: 1 May 30 22:13:40.321: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:41.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:41.322: INFO: Number of nodes with available pods: 1 May 30 22:13:41.322: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:42.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:42.322: INFO: Number of nodes with available pods: 1 May 30 22:13:42.322: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:43.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:43.321: INFO: Number of nodes with available pods: 1 May 30 22:13:43.321: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:44.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:44.322: INFO: Number of nodes with available pods: 1 May 30 22:13:44.322: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:45.318: 
INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:45.322: INFO: Number of nodes with available pods: 1 May 30 22:13:45.322: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:46.317: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:46.320: INFO: Number of nodes with available pods: 1 May 30 22:13:46.320: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:47.317: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:47.321: INFO: Number of nodes with available pods: 1 May 30 22:13:47.321: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:48.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:48.321: INFO: Number of nodes with available pods: 1 May 30 22:13:48.321: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:49.317: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:49.320: INFO: Number of nodes with available pods: 1 May 30 22:13:49.320: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:50.319: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:50.323: INFO: Number of nodes with available pods: 1 May 30 22:13:50.323: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:51.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:51.322: INFO: Number of nodes with available pods: 1 May 30 22:13:51.322: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:52.332: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:52.336: INFO: Number of nodes with available pods: 1 May 30 22:13:52.336: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:13:53.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:13:53.322: INFO: Number of nodes with available pods: 2 May 30 22:13:53.322: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6070, will wait for the garbage collector to delete the pods May 30 22:13:53.384: INFO: Deleting DaemonSet.extensions daemon-set took: 7.208391ms May 30 22:13:53.784: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.330553ms May 30 
22:13:59.589: INFO: Number of nodes with available pods: 0 May 30 22:13:59.589: INFO: Number of running nodes: 0, number of available pods: 0 May 30 22:13:59.592: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6070/daemonsets","resourceVersion":"20445144"},"items":null} May 30 22:13:59.595: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6070/pods","resourceVersion":"20445144"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:13:59.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6070" for this suite. • [SLOW TEST:29.605 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":236,"skipped":3867,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:13:59.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 30 22:14:03.761: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:14:03.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3066" for this suite. 
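Outside the suite, the termination-message check above can be reproduced with a minimal pod; the pod name termination-demo and the path /dev/termination-custom are illustrative, not taken from the test:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-demo            # illustrative name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                 # non-root, as in this [LinuxOnly] spec
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo -n DONE > /dev/termination-custom"]
        terminationMessagePath: /dev/termination-custom   # non-default path
    EOF
    # after the container terminates, the message surfaces in its status:
    kubectl get pod termination-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'

The jsonpath query should print DONE, matching the Expected/actual comparison logged above.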
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3885,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:14:03.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-wnfs STEP: Creating a pod to test atomic-volume-subpath May 30 22:14:03.945: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wnfs" in namespace "subpath-3815" to be "success or failure" May 30 22:14:03.977: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Pending", Reason="", readiness=false. Elapsed: 31.693229ms May 30 22:14:05.981: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036320567s May 30 22:14:07.986: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Running", Reason="", readiness=true. Elapsed: 4.040492824s May 30 22:14:09.989: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Running", Reason="", readiness=true. Elapsed: 6.044147161s May 30 22:14:11.992: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Running", Reason="", readiness=true. Elapsed: 8.047159935s May 30 22:14:13.996: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Running", Reason="", readiness=true. Elapsed: 10.050956549s May 30 22:14:16.000: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Running", Reason="", readiness=true. Elapsed: 12.054767505s May 30 22:14:18.004: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Running", Reason="", readiness=true. Elapsed: 14.058663721s May 30 22:14:20.008: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Running", Reason="", readiness=true. Elapsed: 16.062679495s May 30 22:14:22.012: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Running", Reason="", readiness=true. Elapsed: 18.067183277s May 30 22:14:24.017: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Running", Reason="", readiness=true. Elapsed: 20.071945532s May 30 22:14:26.087: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Running", Reason="", readiness=true. Elapsed: 22.141711071s May 30 22:14:28.123: INFO: Pod "pod-subpath-test-secret-wnfs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.178277487s STEP: Saw pod success May 30 22:14:28.123: INFO: Pod "pod-subpath-test-secret-wnfs" satisfied condition "success or failure" May 30 22:14:28.126: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-wnfs container test-container-subpath-secret-wnfs: STEP: delete the pod May 30 22:14:28.758: INFO: Waiting for pod pod-subpath-test-secret-wnfs to disappear May 30 22:14:28.781: INFO: Pod pod-subpath-test-secret-wnfs no longer exists STEP: Deleting pod pod-subpath-test-secret-wnfs May 30 22:14:28.781: INFO: Deleting pod "pod-subpath-test-secret-wnfs" in namespace "subpath-3815" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:14:28.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3815" for this suite. • [SLOW TEST:25.229 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":238,"skipped":3911,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:14:29.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 30 22:14:29.304: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:14:38.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6903" for this suite. 
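The RestartNever behavior verified above can be sketched by hand; the pod and container names below are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo              # illustrative name
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: busybox
        command: ["/bin/false"]         # init container exits non-zero
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]      # must never be started
    EOF
    # with restartPolicy: Never the failed init container is not retried,
    # the pod phase goes to Failed, and the app container is never created:
    kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'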
• [SLOW TEST:9.869 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":239,"skipped":3915,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:14:38.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-c2f4c067-2789-4d12-b8fa-d799551a2017 STEP: Creating a pod to test consume secrets May 30 22:14:39.001: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c18d83ed-9cec-4ff3-8580-ce62cd97c36c" in namespace "projected-2024" to be "success or failure" May 30 22:14:39.016: INFO: Pod "pod-projected-secrets-c18d83ed-9cec-4ff3-8580-ce62cd97c36c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.196968ms May 30 22:14:41.112: INFO: Pod "pod-projected-secrets-c18d83ed-9cec-4ff3-8580-ce62cd97c36c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111382357s May 30 22:14:43.213: INFO: Pod "pod-projected-secrets-c18d83ed-9cec-4ff3-8580-ce62cd97c36c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.212584782s STEP: Saw pod success May 30 22:14:43.213: INFO: Pod "pod-projected-secrets-c18d83ed-9cec-4ff3-8580-ce62cd97c36c" satisfied condition "success or failure" May 30 22:14:43.218: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-c18d83ed-9cec-4ff3-8580-ce62cd97c36c container projected-secret-volume-test: STEP: delete the pod May 30 22:14:43.371: INFO: Waiting for pod pod-projected-secrets-c18d83ed-9cec-4ff3-8580-ce62cd97c36c to disappear May 30 22:14:43.394: INFO: Pod pod-projected-secrets-c18d83ed-9cec-4ff3-8580-ce62cd97c36c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:14:43.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2024" for this suite. 
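A hand-rolled equivalent of the projected-secret consumption checked above; the secret name, pod name, and mount path are illustrative:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo       # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["cat", "/etc/projected/data-1"]
        volumeMounts:
        - name: vol
          mountPath: /etc/projected
      volumes:
      - name: vol
        projected:                      # projected volume wrapping the secret
          sources:
          - secret:
              name: demo-secret
    EOF
    kubectl logs projected-secret-demo  # should print value-1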
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3921,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:14:43.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:14:44.023: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c3a822c7-d1e7-40c5-9f8d-65df6f7bf369", Controller:(*bool)(0xc001f703a2), BlockOwnerDeletion:(*bool)(0xc001f703a3)}} May 30 22:14:44.108: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c16bf518-0e00-4310-a574-c5221f22a913", Controller:(*bool)(0xc002e485aa), BlockOwnerDeletion:(*bool)(0xc002e485ab)}} May 30 22:14:44.190: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"768520ee-1db7-4ebb-b086-edb586655396", Controller:(*bool)(0xc002c3a23a), BlockOwnerDeletion:(*bool)(0xc002c3a23b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:14:49.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2303" for this suite. 
• [SLOW TEST:5.935 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":241,"skipped":3946,"failed":0} [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:14:49.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 30 22:14:49.394: INFO: Waiting up to 5m0s for pod "pod-d0bf690c-e46f-4a81-9dd3-9a3936770a3a" in namespace "emptydir-7789" to be "success or failure" May 30 22:14:49.406: INFO: Pod "pod-d0bf690c-e46f-4a81-9dd3-9a3936770a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.249834ms May 30 22:14:51.411: INFO: Pod "pod-d0bf690c-e46f-4a81-9dd3-9a3936770a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016739181s May 30 22:14:53.415: INFO: Pod "pod-d0bf690c-e46f-4a81-9dd3-9a3936770a3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020591178s STEP: Saw pod success May 30 22:14:53.415: INFO: Pod "pod-d0bf690c-e46f-4a81-9dd3-9a3936770a3a" satisfied condition "success or failure" May 30 22:14:53.417: INFO: Trying to get logs from node jerma-worker2 pod pod-d0bf690c-e46f-4a81-9dd3-9a3936770a3a container test-container: STEP: delete the pod May 30 22:14:53.472: INFO: Waiting for pod pod-d0bf690c-e46f-4a81-9dd3-9a3936770a3a to disappear May 30 22:14:53.483: INFO: Pod pod-d0bf690c-e46f-4a81-9dd3-9a3936770a3a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:14:53.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7789" for this suite. 
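What the (root,0777,tmpfs) variant asserts can be spot-checked with a throwaway pod; the names are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo               # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c",
          "chmod 0777 /mnt/volume && ls -ld /mnt/volume && mount | grep /mnt/volume"]
        volumeMounts:
        - name: vol
          mountPath: /mnt/volume
      volumes:
      - name: vol
        emptyDir:
          medium: Memory                # backs the emptyDir with tmpfs
    EOF
    kubectl logs emptydir-demo          # expect drwxrwxrwx and a tmpfs mount line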
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3946,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:14:53.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 30 22:14:53.529: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 30 22:15:03.929: INFO: >>> kubeConfig: /root/.kube/config May 30 22:15:06.859: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:15:17.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1478" for this suite. 
• [SLOW TEST:23.859 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":243,"skipped":3951,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:15:17.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 30 22:15:17.441: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-a 2eca67cb-4504-4e63-86db-2001ae7cfc96 20445603 0 2020-05-30 22:15:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 30 22:15:17.442: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-a 2eca67cb-4504-4e63-86db-2001ae7cfc96 20445603 0 2020-05-30 22:15:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 30 22:15:27.448: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-a 2eca67cb-4504-4e63-86db-2001ae7cfc96 20445637 0 2020-05-30 22:15:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 30 22:15:27.448: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-a 2eca67cb-4504-4e63-86db-2001ae7cfc96 20445637 0 2020-05-30 22:15:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 30 22:15:37.457: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-a 
2eca67cb-4504-4e63-86db-2001ae7cfc96 20445667 0 2020-05-30 22:15:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 30 22:15:37.457: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-a 2eca67cb-4504-4e63-86db-2001ae7cfc96 20445667 0 2020-05-30 22:15:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 30 22:15:47.463: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-a 2eca67cb-4504-4e63-86db-2001ae7cfc96 20445697 0 2020-05-30 22:15:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 30 22:15:47.463: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-a 2eca67cb-4504-4e63-86db-2001ae7cfc96 20445697 0 2020-05-30 22:15:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 30 22:15:57.469: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-b 1aefd300-7b4b-4aac-b480-e2bf1764def4 20445727 0 2020-05-30 22:15:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 30 22:15:57.469: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-b 1aefd300-7b4b-4aac-b480-e2bf1764def4 20445727 0 2020-05-30 22:15:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 30 22:16:07.474: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-b 1aefd300-7b4b-4aac-b480-e2bf1764def4 20445757 0 2020-05-30 22:15:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 30 22:16:07.474: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9870 /api/v1/namespaces/watch-9870/configmaps/e2e-watch-test-configmap-b 1aefd300-7b4b-4aac-b480-e2bf1764def4 20445757 0 2020-05-30 22:15:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:16:17.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9870" for this suite. 
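The watch behavior above can be observed interactively; the configmap name and label value mirror the log, while the patch payload is illustrative:

    # a label-selected watch receives ADDED / MODIFIED / DELETED notifications:
    kubectl get configmaps -l watch-this-configmap=multiple-watchers-A -w &
    kubectl create configmap e2e-watch-test-configmap-a
    kubectl label configmap e2e-watch-test-configmap-a \
      watch-this-configmap=multiple-watchers-A      # ADDED (enters the selector)
    kubectl patch configmap e2e-watch-test-configmap-a \
      -p '{"data":{"mutation":"1"}}'                # MODIFIED
    kubectl delete configmap e2e-watch-test-configmap-a   # DELETED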
• [SLOW TEST:60.162 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":244,"skipped":3971,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:16:17.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:16:17.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7290" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":245,"skipped":3980,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:16:17.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 22:16:18.120: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 22:16:20.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473778, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473778, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473778, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726473778, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 22:16:23.164: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:16:23.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9832" for this suite. STEP: Destroying namespace "webhook-9832-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.879 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":246,"skipped":3980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:16:23.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-e05cb0f0-d18c-4b65-9f7d-6aceaabd8ae2 May 30 22:16:23.598: INFO: Pod name my-hostname-basic-e05cb0f0-d18c-4b65-9f7d-6aceaabd8ae2: Found 0 pods out of 1 May 30 22:16:28.602: INFO: Pod name my-hostname-basic-e05cb0f0-d18c-4b65-9f7d-6aceaabd8ae2: Found 1 pods out of 1 May 30 22:16:28.602: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e05cb0f0-d18c-4b65-9f7d-6aceaabd8ae2" are running May 30 22:16:28.604: INFO: Pod "my-hostname-basic-e05cb0f0-d18c-4b65-9f7d-6aceaabd8ae2-j79bc" is 
running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 22:16:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 22:16:27 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 22:16:27 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 22:16:23 +0000 UTC Reason: Message:}]) May 30 22:16:28.604: INFO: Trying to dial the pod May 30 22:16:33.618: INFO: Controller my-hostname-basic-e05cb0f0-d18c-4b65-9f7d-6aceaabd8ae2: Got expected result from replica 1 [my-hostname-basic-e05cb0f0-d18c-4b65-9f7d-6aceaabd8ae2-j79bc]: "my-hostname-basic-e05cb0f0-d18c-4b65-9f7d-6aceaabd8ae2-j79bc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:16:33.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8046" for this suite. • [SLOW TEST:10.149 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":247,"skipped":4012,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:16:33.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5908 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 30 22:16:33.707: INFO: Found 0 stateful pods, waiting for 3 May 30 22:16:43.712: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 30 22:16:43.712: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 30 22:16:43.712: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 30 22:16:53.712: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 30 22:16:53.712: INFO: 
Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 30 22:16:53.712: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 30 22:16:53.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5908 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 22:16:54.013: INFO: stderr: "I0530 22:16:53.857556 4123 log.go:172] (0xc0000f4c60) (0xc000a0e140) Create stream\nI0530 22:16:53.857651 4123 log.go:172] (0xc0000f4c60) (0xc000a0e140) Stream added, broadcasting: 1\nI0530 22:16:53.860401 4123 log.go:172] (0xc0000f4c60) Reply frame received for 1\nI0530 22:16:53.860447 4123 log.go:172] (0xc0000f4c60) (0xc00060db80) Create stream\nI0530 22:16:53.860457 4123 log.go:172] (0xc0000f4c60) (0xc00060db80) Stream added, broadcasting: 3\nI0530 22:16:53.861812 4123 log.go:172] (0xc0000f4c60) Reply frame received for 3\nI0530 22:16:53.861845 4123 log.go:172] (0xc0000f4c60) (0xc0003dd540) Create stream\nI0530 22:16:53.861857 4123 log.go:172] (0xc0000f4c60) (0xc0003dd540) Stream added, broadcasting: 5\nI0530 22:16:53.862731 4123 log.go:172] (0xc0000f4c60) Reply frame received for 5\nI0530 22:16:53.943967 4123 log.go:172] (0xc0000f4c60) Data frame received for 5\nI0530 22:16:53.943989 4123 log.go:172] (0xc0003dd540) (5) Data frame handling\nI0530 22:16:53.944001 4123 log.go:172] (0xc0003dd540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 22:16:54.003086 4123 log.go:172] (0xc0000f4c60) Data frame received for 3\nI0530 22:16:54.003128 4123 log.go:172] (0xc00060db80) (3) Data frame handling\nI0530 22:16:54.003256 4123 log.go:172] (0xc00060db80) (3) Data frame sent\nI0530 22:16:54.003302 4123 log.go:172] (0xc0000f4c60) Data frame received for 5\nI0530 22:16:54.003319 4123 log.go:172] (0xc0003dd540) (5) Data frame handling\nI0530 22:16:54.003648 4123 log.go:172] (0xc0000f4c60) Data frame received for 3\nI0530 22:16:54.003666 4123 log.go:172] (0xc00060db80) (3) Data frame handling\nI0530 22:16:54.005736 4123 log.go:172] (0xc0000f4c60) Data frame received for 1\nI0530 22:16:54.005753 4123 log.go:172] (0xc000a0e140) (1) Data frame handling\nI0530 22:16:54.005765 4123 log.go:172] (0xc000a0e140) (1) Data frame sent\nI0530 22:16:54.005777 4123 log.go:172] (0xc0000f4c60) (0xc000a0e140) Stream removed, broadcasting: 1\nI0530 22:16:54.005796 4123 log.go:172] (0xc0000f4c60) Go away received\nI0530 22:16:54.006214 4123 log.go:172] (0xc0000f4c60) (0xc000a0e140) Stream removed, broadcasting: 1\nI0530 22:16:54.006246 4123 log.go:172] (0xc0000f4c60) (0xc00060db80) Stream removed, broadcasting: 3\nI0530 22:16:54.006263 4123 log.go:172] (0xc0000f4c60) (0xc0003dd540) Stream removed, broadcasting: 5\n" May 30 22:16:54.013: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 22:16:54.013: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 30 22:17:04.050: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 30 22:17:14.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5908 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 22:17:14.312: INFO: 
stderr: "I0530 22:17:14.225356 4144 log.go:172] (0xc00092ea50) (0xc00065bb80) Create stream\nI0530 22:17:14.225416 4144 log.go:172] (0xc00092ea50) (0xc00065bb80) Stream added, broadcasting: 1\nI0530 22:17:14.227713 4144 log.go:172] (0xc00092ea50) Reply frame received for 1\nI0530 22:17:14.227743 4144 log.go:172] (0xc00092ea50) (0xc0008e8000) Create stream\nI0530 22:17:14.227752 4144 log.go:172] (0xc00092ea50) (0xc0008e8000) Stream added, broadcasting: 3\nI0530 22:17:14.228930 4144 log.go:172] (0xc00092ea50) Reply frame received for 3\nI0530 22:17:14.228953 4144 log.go:172] (0xc00092ea50) (0xc0002b4000) Create stream\nI0530 22:17:14.228960 4144 log.go:172] (0xc00092ea50) (0xc0002b4000) Stream added, broadcasting: 5\nI0530 22:17:14.229940 4144 log.go:172] (0xc00092ea50) Reply frame received for 5\nI0530 22:17:14.304532 4144 log.go:172] (0xc00092ea50) Data frame received for 3\nI0530 22:17:14.304568 4144 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0530 22:17:14.304579 4144 log.go:172] (0xc0008e8000) (3) Data frame sent\nI0530 22:17:14.304586 4144 log.go:172] (0xc00092ea50) Data frame received for 3\nI0530 22:17:14.304592 4144 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0530 22:17:14.304617 4144 log.go:172] (0xc00092ea50) Data frame received for 5\nI0530 22:17:14.304626 4144 log.go:172] (0xc0002b4000) (5) Data frame handling\nI0530 22:17:14.304642 4144 log.go:172] (0xc0002b4000) (5) Data frame sent\nI0530 22:17:14.304653 4144 log.go:172] (0xc00092ea50) Data frame received for 5\nI0530 22:17:14.304659 4144 log.go:172] (0xc0002b4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 22:17:14.306213 4144 log.go:172] (0xc00092ea50) Data frame received for 1\nI0530 22:17:14.306239 4144 log.go:172] (0xc00065bb80) (1) Data frame handling\nI0530 22:17:14.306269 4144 log.go:172] (0xc00065bb80) (1) Data frame sent\nI0530 22:17:14.306294 4144 log.go:172] (0xc00092ea50) (0xc00065bb80) Stream removed, broadcasting: 1\nI0530 22:17:14.306315 4144 log.go:172] (0xc00092ea50) Go away received\nI0530 22:17:14.306643 4144 log.go:172] (0xc00092ea50) (0xc00065bb80) Stream removed, broadcasting: 1\nI0530 22:17:14.306658 4144 log.go:172] (0xc00092ea50) (0xc0008e8000) Stream removed, broadcasting: 3\nI0530 22:17:14.306665 4144 log.go:172] (0xc00092ea50) (0xc0002b4000) Stream removed, broadcasting: 5\n" May 30 22:17:14.312: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 22:17:14.312: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 22:17:34.333: INFO: Waiting for StatefulSet statefulset-5908/ss2 to complete update May 30 22:17:34.333: INFO: Waiting for Pod statefulset-5908/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 30 22:17:44.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5908 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 22:17:44.614: INFO: stderr: "I0530 22:17:44.475242 4166 log.go:172] (0xc0000f4b00) (0xc000609b80) Create stream\nI0530 22:17:44.475311 4166 log.go:172] (0xc0000f4b00) (0xc000609b80) Stream added, broadcasting: 1\nI0530 22:17:44.478295 4166 log.go:172] (0xc0000f4b00) Reply frame received for 1\nI0530 22:17:44.478317 4166 log.go:172] (0xc0000f4b00) (0xc000609d60) Create stream\nI0530 22:17:44.478323 4166 log.go:172] (0xc0000f4b00) (0xc000609d60) 
Stream added, broadcasting: 3\nI0530 22:17:44.479106 4166 log.go:172] (0xc0000f4b00) Reply frame received for 3\nI0530 22:17:44.479153 4166 log.go:172] (0xc0000f4b00) (0xc000609e00) Create stream\nI0530 22:17:44.479167 4166 log.go:172] (0xc0000f4b00) (0xc000609e00) Stream added, broadcasting: 5\nI0530 22:17:44.480022 4166 log.go:172] (0xc0000f4b00) Reply frame received for 5\nI0530 22:17:44.573764 4166 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0530 22:17:44.573798 4166 log.go:172] (0xc000609e00) (5) Data frame handling\nI0530 22:17:44.573957 4166 log.go:172] (0xc000609e00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 22:17:44.606387 4166 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0530 22:17:44.606449 4166 log.go:172] (0xc000609e00) (5) Data frame handling\nI0530 22:17:44.606486 4166 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0530 22:17:44.606519 4166 log.go:172] (0xc000609d60) (3) Data frame handling\nI0530 22:17:44.606547 4166 log.go:172] (0xc000609d60) (3) Data frame sent\nI0530 22:17:44.606760 4166 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0530 22:17:44.606772 4166 log.go:172] (0xc000609d60) (3) Data frame handling\nI0530 22:17:44.608402 4166 log.go:172] (0xc0000f4b00) Data frame received for 1\nI0530 22:17:44.608438 4166 log.go:172] (0xc000609b80) (1) Data frame handling\nI0530 22:17:44.608466 4166 log.go:172] (0xc000609b80) (1) Data frame sent\nI0530 22:17:44.608488 4166 log.go:172] (0xc0000f4b00) (0xc000609b80) Stream removed, broadcasting: 1\nI0530 22:17:44.608514 4166 log.go:172] (0xc0000f4b00) Go away received\nI0530 22:17:44.608863 4166 log.go:172] (0xc0000f4b00) (0xc000609b80) Stream removed, broadcasting: 1\nI0530 22:17:44.608876 4166 log.go:172] (0xc0000f4b00) (0xc000609d60) Stream removed, broadcasting: 3\nI0530 22:17:44.608881 4166 log.go:172] (0xc0000f4b00) (0xc000609e00) Stream removed, broadcasting: 5\n" May 30 22:17:44.614: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 22:17:44.614: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 22:17:54.648: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 30 22:18:04.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5908 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 22:18:04.916: INFO: stderr: "I0530 22:18:04.848864 4186 log.go:172] (0xc000a016b0) (0xc000a386e0) Create stream\nI0530 22:18:04.848926 4186 log.go:172] (0xc000a016b0) (0xc000a386e0) Stream added, broadcasting: 1\nI0530 22:18:04.854415 4186 log.go:172] (0xc000a016b0) Reply frame received for 1\nI0530 22:18:04.854454 4186 log.go:172] (0xc000a016b0) (0xc000a38000) Create stream\nI0530 22:18:04.854461 4186 log.go:172] (0xc000a016b0) (0xc000a38000) Stream added, broadcasting: 3\nI0530 22:18:04.855321 4186 log.go:172] (0xc000a016b0) Reply frame received for 3\nI0530 22:18:04.855362 4186 log.go:172] (0xc000a016b0) (0xc000a380a0) Create stream\nI0530 22:18:04.855377 4186 log.go:172] (0xc000a016b0) (0xc000a380a0) Stream added, broadcasting: 5\nI0530 22:18:04.856169 4186 log.go:172] (0xc000a016b0) Reply frame received for 5\nI0530 22:18:04.910042 4186 log.go:172] (0xc000a016b0) Data frame received for 5\nI0530 22:18:04.910081 4186 log.go:172] (0xc000a380a0) (5) Data frame handling\nI0530 22:18:04.910096 4186 log.go:172] 
(0xc000a380a0) (5) Data frame sent\nI0530 22:18:04.910105 4186 log.go:172] (0xc000a016b0) Data frame received for 5\nI0530 22:18:04.910111 4186 log.go:172] (0xc000a380a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 22:18:04.910133 4186 log.go:172] (0xc000a016b0) Data frame received for 3\nI0530 22:18:04.910141 4186 log.go:172] (0xc000a38000) (3) Data frame handling\nI0530 22:18:04.910155 4186 log.go:172] (0xc000a38000) (3) Data frame sent\nI0530 22:18:04.910163 4186 log.go:172] (0xc000a016b0) Data frame received for 3\nI0530 22:18:04.910170 4186 log.go:172] (0xc000a38000) (3) Data frame handling\nI0530 22:18:04.911251 4186 log.go:172] (0xc000a016b0) Data frame received for 1\nI0530 22:18:04.911270 4186 log.go:172] (0xc000a386e0) (1) Data frame handling\nI0530 22:18:04.911282 4186 log.go:172] (0xc000a386e0) (1) Data frame sent\nI0530 22:18:04.911295 4186 log.go:172] (0xc000a016b0) (0xc000a386e0) Stream removed, broadcasting: 1\nI0530 22:18:04.911328 4186 log.go:172] (0xc000a016b0) Go away received\nI0530 22:18:04.911547 4186 log.go:172] (0xc000a016b0) (0xc000a386e0) Stream removed, broadcasting: 1\nI0530 22:18:04.911559 4186 log.go:172] (0xc000a016b0) (0xc000a38000) Stream removed, broadcasting: 3\nI0530 22:18:04.911565 4186 log.go:172] (0xc000a016b0) (0xc000a380a0) Stream removed, broadcasting: 5\n" May 30 22:18:04.917: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 22:18:04.917: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 22:18:14.938: INFO: Waiting for StatefulSet statefulset-5908/ss2 to complete update May 30 22:18:14.938: INFO: Waiting for Pod statefulset-5908/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 30 22:18:14.938: INFO: Waiting for Pod statefulset-5908/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 30 22:18:14.938: INFO: Waiting for Pod statefulset-5908/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 30 22:18:24.944: INFO: Waiting for StatefulSet statefulset-5908/ss2 to complete update May 30 22:18:24.944: INFO: Waiting for Pod statefulset-5908/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 30 22:18:24.944: INFO: Waiting for Pod statefulset-5908/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 30 22:18:34.946: INFO: Waiting for StatefulSet statefulset-5908/ss2 to complete update May 30 22:18:34.946: INFO: Waiting for Pod statefulset-5908/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 30 22:18:44.946: INFO: Deleting all statefulset in ns statefulset-5908 May 30 22:18:44.950: INFO: Scaling statefulset ss2 to 0 May 30 22:19:14.972: INFO: Waiting for statefulset status.replicas updated to 0 May 30 22:19:14.975: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:19:14.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5908" for this suite. 
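The update-then-rollback flow the spec drives through the API can be reproduced with kubectl; the container name webserver and the app=ss2 selector are assumptions, not read from the log:

    kubectl set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
    kubectl rollout status statefulset/ss2          # watch the rolling update
    kubectl rollout undo statefulset/ss2            # roll back to the prior revision
    kubectl rollout status statefulset/ss2
    # per-pod revisions (the ss2-65c7964b94 / ss2-84f9d6bf57 hashes above)
    # are exposed via the controller-revision-hash label:
    kubectl get pods -l app=ss2 -L controller-revision-hash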
• [SLOW TEST:161.402 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":248,"skipped":4021,"failed":0} [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:19:15.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:19:15.119: INFO: Create a RollingUpdate DaemonSet May 30 22:19:15.122: INFO: Check that daemon pods launch on every node of the cluster May 30 22:19:15.125: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:15.157: INFO: Number of nodes with available pods: 0 May 30 22:19:15.157: INFO: Node jerma-worker is running more than one daemon pod May 30 22:19:16.213: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:16.217: INFO: Number of nodes with available pods: 0 May 30 22:19:16.217: INFO: Node jerma-worker is running more than one daemon pod May 30 22:19:17.298: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:17.342: INFO: Number of nodes with available pods: 0 May 30 22:19:17.343: INFO: Node jerma-worker is running more than one daemon pod May 30 22:19:18.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:18.171: INFO: Number of nodes with available pods: 0 May 30 22:19:18.171: INFO: Node jerma-worker is running more than one daemon pod May 30 22:19:19.174: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:19.177: INFO: Number of nodes with available pods: 1 May 30 22:19:19.177: INFO: Node jerma-worker2 is running more than one daemon pod May 30 22:19:20.170: INFO: DaemonSet pods can't tolerate node jerma-control-plane 
with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:20.180: INFO: Number of nodes with available pods: 2 May 30 22:19:20.180: INFO: Number of running nodes: 2, number of available pods: 2 May 30 22:19:20.180: INFO: Update the DaemonSet to trigger a rollout May 30 22:19:20.191: INFO: Updating DaemonSet daemon-set May 30 22:19:29.270: INFO: Roll back the DaemonSet before rollout is complete May 30 22:19:29.303: INFO: Updating DaemonSet daemon-set May 30 22:19:29.303: INFO: Make sure DaemonSet rollback is complete May 30 22:19:29.331: INFO: Wrong image for pod: daemon-set-cf8p4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 30 22:19:29.331: INFO: Pod daemon-set-cf8p4 is not available May 30 22:19:29.347: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:30.351: INFO: Wrong image for pod: daemon-set-cf8p4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 30 22:19:30.351: INFO: Pod daemon-set-cf8p4 is not available May 30 22:19:30.354: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:31.350: INFO: Wrong image for pod: daemon-set-cf8p4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 30 22:19:31.350: INFO: Pod daemon-set-cf8p4 is not available May 30 22:19:31.354: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:32.352: INFO: Wrong image for pod: daemon-set-cf8p4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 30 22:19:32.352: INFO: Pod daemon-set-cf8p4 is not available May 30 22:19:32.355: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:33.352: INFO: Wrong image for pod: daemon-set-cf8p4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 30 22:19:33.352: INFO: Pod daemon-set-cf8p4 is not available May 30 22:19:33.356: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:34.350: INFO: Wrong image for pod: daemon-set-cf8p4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 30 22:19:34.350: INFO: Pod daemon-set-cf8p4 is not available May 30 22:19:34.354: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 22:19:35.352: INFO: Pod daemon-set-5wg58 is not available May 30 22:19:35.356: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1594, will wait for the garbage collector to delete the pods May 30 22:19:35.422: INFO: Deleting DaemonSet.extensions daemon-set took: 6.230003ms May 30 22:19:35.722: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.267979ms May 30 22:19:49.565: INFO: Number of nodes with available pods: 0 May 30 22:19:49.565: INFO: Number of running nodes: 0, number of available pods: 0 May 30 22:19:49.568: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1594/daemonsets","resourceVersion":"20446952"},"items":null} May 30 22:19:49.571: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1594/pods","resourceVersion":"20446952"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:19:49.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1594" for this suite. 
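The sequence above updates the DaemonSet to an unresolvable image (foo:non-existent) and rolls it back before the rollout finishes; note that only the broken pod (daemon-set-cf8p4) is replaced, so pods never touched by the bad template keep running without a restart. A hedged kubectl equivalent, assuming the container is named "app" (the log does not name it):

    kubectl -n daemonsets-1594 set image daemonset/daemon-set app=foo:non-existent
    kubectl -n daemonsets-1594 rollout status daemonset/daemon-set --timeout=10s || true  # stalls: the image cannot be pulled
    kubectl -n daemonsets-1594 rollout undo daemonset/daemon-set                          # only the failed pod is recreated
    kubectl -n daemonsets-1594 get pods -o wide                                           # healthy pods keep RESTARTS=0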
• [SLOW TEST:34.561 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":249,"skipped":4021,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:19:49.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-c4f42727-b88f-4bd0-9e40-947d6651221d STEP: Creating a pod to test consume secrets May 30 22:19:49.752: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d6b5a5b0-2668-4bf8-84d5-f1596b2bcfd1" in namespace "projected-9968" to be "success or failure" May 30 22:19:49.755: INFO: Pod "pod-projected-secrets-d6b5a5b0-2668-4bf8-84d5-f1596b2bcfd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.916093ms May 30 22:19:51.759: INFO: Pod "pod-projected-secrets-d6b5a5b0-2668-4bf8-84d5-f1596b2bcfd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007037189s May 30 22:19:53.762: INFO: Pod "pod-projected-secrets-d6b5a5b0-2668-4bf8-84d5-f1596b2bcfd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01057208s STEP: Saw pod success May 30 22:19:53.763: INFO: Pod "pod-projected-secrets-d6b5a5b0-2668-4bf8-84d5-f1596b2bcfd1" satisfied condition "success or failure" May 30 22:19:53.765: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-d6b5a5b0-2668-4bf8-84d5-f1596b2bcfd1 container projected-secret-volume-test: STEP: delete the pod May 30 22:19:53.792: INFO: Waiting for pod pod-projected-secrets-d6b5a5b0-2668-4bf8-84d5-f1596b2bcfd1 to disappear May 30 22:19:53.833: INFO: Pod pod-projected-secrets-d6b5a5b0-2668-4bf8-84d5-f1596b2bcfd1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:19:53.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9968" for this suite. 
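"Consumable from pods in volume with mappings" means the Secret is mounted through a projected volume whose items: list remaps a key onto a new file path, and the container then reads that file back (the "Trying to get logs from ... container projected-secret-volume-test" step). A minimal sketch of such a pod; the secret key, file path, and image are illustrative assumptions, not values from the log:

    kubectl -n projected-9968 create secret generic projected-secret-demo --from-literal=data-1=value-1
    kubectl -n projected-9968 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
        volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
          readOnly: true
      volumes:
      - name: projected-secret-volume
        projected:
          sources:
          - secret:
              name: projected-secret-demo
              items:
              - key: data-1
                path: new-path-data-1    # the key-to-path mapping under test
    EOF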
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4023,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:19:53.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0530 22:20:34.357382 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 30 22:20:34.357: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:20:34.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3333" for this suite. 
• [SLOW TEST:40.497 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":251,"skipped":4092,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:20:34.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:20:38.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4152" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4105,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:20:38.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:20:58.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3796" for this suite. 
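"Tasks sometimes fail and are locally restarted" relies on restartPolicy: OnFailure: the kubelet restarts the failed container inside the same pod rather than the Job controller creating a replacement pod. The usual trick for failing exactly once per pod is a marker file on an emptyDir, which persists across container restarts. A sketch under those assumptions (name, image, and counts are illustrative):

    kubectl -n job-3796 apply -f - <<'EOF'
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: fail-once-local
    spec:
      completions: 4
      parallelism: 2
      template:
        spec:
          restartPolicy: OnFailure    # restart in place; do not replace the pod
          containers:
          - name: c
            image: docker.io/library/busybox:1.29
            command: ["/bin/sh", "-c", "if [ ! -e /data/ran ]; then touch /data/ran; exit 1; fi"]
            volumeMounts:
            - name: data
              mountPath: /data
          volumes:
          - name: data
            emptyDir: {}              # survives container restarts within the pod
    EOF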
• [SLOW TEST:20.202 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":253,"skipped":4116,"failed":0} SSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:20:58.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:20:58.908: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6224 I0530 22:20:58.924677 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6224, replica count: 1 I0530 22:20:59.975085 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 22:21:00.975328 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 22:21:01.975544 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 22:21:02.117: INFO: Created: latency-svc-fpmqb May 30 22:21:02.160: INFO: Got endpoints: latency-svc-fpmqb [84.305949ms] May 30 22:21:02.247: INFO: Created: latency-svc-7dtvc May 30 22:21:02.269: INFO: Got endpoints: latency-svc-7dtvc [109.054068ms] May 30 22:21:02.295: INFO: Created: latency-svc-d7g49 May 30 22:21:02.310: INFO: Got endpoints: latency-svc-d7g49 [150.187428ms] May 30 22:21:02.369: INFO: Created: latency-svc-llccc May 30 22:21:02.398: INFO: Got endpoints: latency-svc-llccc [238.360714ms] May 30 22:21:02.440: INFO: Created: latency-svc-j5f82 May 30 22:21:02.467: INFO: Got endpoints: latency-svc-j5f82 [306.745266ms] May 30 22:21:02.518: INFO: Created: latency-svc-s4nss May 30 22:21:02.533: INFO: Got endpoints: latency-svc-s4nss [373.196967ms] May 30 22:21:02.560: INFO: Created: latency-svc-ftkls May 30 22:21:02.591: INFO: Got endpoints: latency-svc-ftkls [430.484659ms] May 30 22:21:02.638: INFO: Created: latency-svc-hn5dz May 30 22:21:02.641: INFO: Got endpoints: latency-svc-hn5dz [481.465574ms] May 30 22:21:02.697: INFO: Created: latency-svc-9tqwc May 30 22:21:02.713: INFO: Got endpoints: latency-svc-9tqwc [553.547855ms] May 30 22:21:02.794: INFO: Created: latency-svc-q4g9t May 30 22:21:02.798: INFO: Got endpoints: latency-svc-q4g9t [638.427948ms] May 30 22:21:02.885: INFO: Created: latency-svc-gmkt5 May 30 22:21:02.932: INFO: Got endpoints: latency-svc-gmkt5 [771.91774ms] May 30 22:21:02.955: INFO: Created: 
latency-svc-gcsrz May 30 22:21:02.972: INFO: Got endpoints: latency-svc-gcsrz [812.144793ms] May 30 22:21:02.998: INFO: Created: latency-svc-7bhbm May 30 22:21:03.015: INFO: Got endpoints: latency-svc-7bhbm [854.592059ms] May 30 22:21:03.075: INFO: Created: latency-svc-zms8p May 30 22:21:03.088: INFO: Got endpoints: latency-svc-zms8p [927.974201ms] May 30 22:21:03.125: INFO: Created: latency-svc-26zpz May 30 22:21:03.135: INFO: Got endpoints: latency-svc-26zpz [974.752326ms] May 30 22:21:03.169: INFO: Created: latency-svc-fct2h May 30 22:21:03.219: INFO: Got endpoints: latency-svc-fct2h [1.058882424s] May 30 22:21:03.248: INFO: Created: latency-svc-vpfr4 May 30 22:21:03.282: INFO: Got endpoints: latency-svc-vpfr4 [1.013389639s] May 30 22:21:03.304: INFO: Created: latency-svc-lxjts May 30 22:21:03.357: INFO: Got endpoints: latency-svc-lxjts [1.047425774s] May 30 22:21:03.388: INFO: Created: latency-svc-gkbtp May 30 22:21:03.407: INFO: Got endpoints: latency-svc-gkbtp [1.009029499s] May 30 22:21:03.429: INFO: Created: latency-svc-gwc9r May 30 22:21:03.444: INFO: Got endpoints: latency-svc-gwc9r [977.293799ms] May 30 22:21:03.507: INFO: Created: latency-svc-6kcxx May 30 22:21:03.523: INFO: Got endpoints: latency-svc-6kcxx [990.303902ms] May 30 22:21:03.561: INFO: Created: latency-svc-r5hg6 May 30 22:21:03.570: INFO: Got endpoints: latency-svc-r5hg6 [979.665128ms] May 30 22:21:03.598: INFO: Created: latency-svc-kbclb May 30 22:21:03.644: INFO: Got endpoints: latency-svc-kbclb [1.00230907s] May 30 22:21:03.664: INFO: Created: latency-svc-qvtlz May 30 22:21:03.680: INFO: Got endpoints: latency-svc-qvtlz [966.093426ms] May 30 22:21:03.711: INFO: Created: latency-svc-nj6fb May 30 22:21:03.722: INFO: Got endpoints: latency-svc-nj6fb [923.558827ms] May 30 22:21:03.801: INFO: Created: latency-svc-tms67 May 30 22:21:03.804: INFO: Got endpoints: latency-svc-tms67 [871.949931ms] May 30 22:21:03.860: INFO: Created: latency-svc-kckzc May 30 22:21:03.955: INFO: Got endpoints: latency-svc-kckzc [983.088239ms] May 30 22:21:03.968: INFO: Created: latency-svc-hnxqn May 30 22:21:03.987: INFO: Got endpoints: latency-svc-hnxqn [971.900862ms] May 30 22:21:04.023: INFO: Created: latency-svc-khqqp May 30 22:21:04.042: INFO: Got endpoints: latency-svc-khqqp [953.486682ms] May 30 22:21:04.100: INFO: Created: latency-svc-tm4qv May 30 22:21:04.108: INFO: Got endpoints: latency-svc-tm4qv [972.595442ms] May 30 22:21:04.185: INFO: Created: latency-svc-8fsnn May 30 22:21:04.273: INFO: Got endpoints: latency-svc-8fsnn [1.054038354s] May 30 22:21:04.336: INFO: Created: latency-svc-w6llc May 30 22:21:04.428: INFO: Got endpoints: latency-svc-w6llc [1.146171162s] May 30 22:21:04.449: INFO: Created: latency-svc-jtzcd May 30 22:21:04.462: INFO: Got endpoints: latency-svc-jtzcd [1.10452163s] May 30 22:21:04.517: INFO: Created: latency-svc-wcpqd May 30 22:21:04.554: INFO: Got endpoints: latency-svc-wcpqd [1.146660047s] May 30 22:21:04.583: INFO: Created: latency-svc-8pp8p May 30 22:21:04.601: INFO: Got endpoints: latency-svc-8pp8p [1.156561932s] May 30 22:21:04.641: INFO: Created: latency-svc-wxp6t May 30 22:21:04.698: INFO: Got endpoints: latency-svc-wxp6t [1.174984538s] May 30 22:21:04.708: INFO: Created: latency-svc-fdsmj May 30 22:21:04.727: INFO: Got endpoints: latency-svc-fdsmj [1.157158776s] May 30 22:21:04.750: INFO: Created: latency-svc-vpmc9 May 30 22:21:04.764: INFO: Got endpoints: latency-svc-vpmc9 [1.120651895s] May 30 22:21:04.798: INFO: Created: latency-svc-5s2zs May 30 22:21:04.855: INFO: Created: latency-svc-bzk7n 
May 30 22:21:04.901: INFO: Got endpoints: latency-svc-bzk7n [1.178927535s] May 30 22:21:04.901: INFO: Got endpoints: latency-svc-5s2zs [1.221260343s] May 30 22:21:04.942: INFO: Created: latency-svc-gzp2v May 30 22:21:05.021: INFO: Got endpoints: latency-svc-gzp2v [1.217692066s] May 30 22:21:05.061: INFO: Created: latency-svc-7b64f May 30 22:21:05.092: INFO: Got endpoints: latency-svc-7b64f [1.136900917s] May 30 22:21:05.159: INFO: Created: latency-svc-lbs77 May 30 22:21:05.162: INFO: Got endpoints: latency-svc-lbs77 [1.175050635s] May 30 22:21:05.211: INFO: Created: latency-svc-t5rfn May 30 22:21:05.228: INFO: Got endpoints: latency-svc-t5rfn [1.186369819s] May 30 22:21:05.308: INFO: Created: latency-svc-jnbfc May 30 22:21:05.324: INFO: Got endpoints: latency-svc-jnbfc [1.216777461s] May 30 22:21:05.441: INFO: Created: latency-svc-dhsbh May 30 22:21:05.458: INFO: Got endpoints: latency-svc-dhsbh [1.184749598s] May 30 22:21:05.489: INFO: Created: latency-svc-d4lds May 30 22:21:05.506: INFO: Got endpoints: latency-svc-d4lds [1.077031059s] May 30 22:21:05.537: INFO: Created: latency-svc-w5lrx May 30 22:21:05.572: INFO: Got endpoints: latency-svc-w5lrx [1.109885694s] May 30 22:21:05.608: INFO: Created: latency-svc-rtt7f May 30 22:21:05.620: INFO: Got endpoints: latency-svc-rtt7f [1.065694742s] May 30 22:21:05.655: INFO: Created: latency-svc-g9g7p May 30 22:21:05.704: INFO: Got endpoints: latency-svc-g9g7p [1.103464446s] May 30 22:21:05.782: INFO: Created: latency-svc-9fhjj May 30 22:21:05.801: INFO: Got endpoints: latency-svc-9fhjj [1.102813621s] May 30 22:21:05.848: INFO: Created: latency-svc-j6stz May 30 22:21:05.861: INFO: Got endpoints: latency-svc-j6stz [1.133721132s] May 30 22:21:05.918: INFO: Created: latency-svc-9xxdk May 30 22:21:05.991: INFO: Got endpoints: latency-svc-9xxdk [1.226783775s] May 30 22:21:06.039: INFO: Created: latency-svc-f6ggr May 30 22:21:06.070: INFO: Got endpoints: latency-svc-f6ggr [1.169122772s] May 30 22:21:06.135: INFO: Created: latency-svc-rz7wf May 30 22:21:06.138: INFO: Got endpoints: latency-svc-rz7wf [1.237260667s] May 30 22:21:06.171: INFO: Created: latency-svc-5qt4t May 30 22:21:06.187: INFO: Got endpoints: latency-svc-5qt4t [1.165347735s] May 30 22:21:06.213: INFO: Created: latency-svc-dhlzg May 30 22:21:06.229: INFO: Got endpoints: latency-svc-dhlzg [1.137194113s] May 30 22:21:06.305: INFO: Created: latency-svc-45nb6 May 30 22:21:06.319: INFO: Got endpoints: latency-svc-45nb6 [1.157237564s] May 30 22:21:06.399: INFO: Created: latency-svc-qnmqr May 30 22:21:06.470: INFO: Got endpoints: latency-svc-qnmqr [1.242014924s] May 30 22:21:06.507: INFO: Created: latency-svc-zx5mj May 30 22:21:06.518: INFO: Got endpoints: latency-svc-zx5mj [1.193240436s] May 30 22:21:06.567: INFO: Created: latency-svc-p2mz6 May 30 22:21:06.614: INFO: Got endpoints: latency-svc-p2mz6 [1.156569723s] May 30 22:21:06.663: INFO: Created: latency-svc-mxstx May 30 22:21:06.681: INFO: Got endpoints: latency-svc-mxstx [1.175021993s] May 30 22:21:06.706: INFO: Created: latency-svc-2bvk6 May 30 22:21:06.746: INFO: Got endpoints: latency-svc-2bvk6 [1.173671971s] May 30 22:21:06.784: INFO: Created: latency-svc-kcdnd May 30 22:21:06.831: INFO: Got endpoints: latency-svc-kcdnd [1.210645713s] May 30 22:21:06.928: INFO: Created: latency-svc-v7kdp May 30 22:21:06.946: INFO: Got endpoints: latency-svc-v7kdp [1.241488235s] May 30 22:21:07.016: INFO: Created: latency-svc-zr6k2 May 30 22:21:07.024: INFO: Got endpoints: latency-svc-zr6k2 [1.22248715s] May 30 22:21:07.053: INFO: Created: 
latency-svc-f6v8m May 30 22:21:07.066: INFO: Got endpoints: latency-svc-f6v8m [1.205284759s] May 30 22:21:07.100: INFO: Created: latency-svc-dx2xj May 30 22:21:07.147: INFO: Got endpoints: latency-svc-dx2xj [1.155285102s] May 30 22:21:07.161: INFO: Created: latency-svc-xbx2w May 30 22:21:07.175: INFO: Got endpoints: latency-svc-xbx2w [1.105432791s] May 30 22:21:07.209: INFO: Created: latency-svc-krm2v May 30 22:21:07.239: INFO: Got endpoints: latency-svc-krm2v [1.10073011s] May 30 22:21:07.330: INFO: Created: latency-svc-r8tmw May 30 22:21:07.344: INFO: Got endpoints: latency-svc-r8tmw [1.157561894s] May 30 22:21:07.459: INFO: Created: latency-svc-mtvnd May 30 22:21:07.470: INFO: Got endpoints: latency-svc-mtvnd [1.240758219s] May 30 22:21:07.515: INFO: Created: latency-svc-7jq5t May 30 22:21:07.531: INFO: Got endpoints: latency-svc-7jq5t [1.211925288s] May 30 22:21:07.614: INFO: Created: latency-svc-tbkdg May 30 22:21:07.621: INFO: Got endpoints: latency-svc-tbkdg [1.151010908s] May 30 22:21:07.659: INFO: Created: latency-svc-9k598 May 30 22:21:07.675: INFO: Got endpoints: latency-svc-9k598 [1.157541012s] May 30 22:21:07.758: INFO: Created: latency-svc-h8vp7 May 30 22:21:07.774: INFO: Got endpoints: latency-svc-h8vp7 [1.159223697s] May 30 22:21:07.815: INFO: Created: latency-svc-c8wnv May 30 22:21:07.832: INFO: Got endpoints: latency-svc-c8wnv [1.151369033s] May 30 22:21:07.925: INFO: Created: latency-svc-7b27j May 30 22:21:07.930: INFO: Got endpoints: latency-svc-7b27j [1.183819785s] May 30 22:21:08.002: INFO: Created: latency-svc-k4dsd May 30 22:21:08.019: INFO: Got endpoints: latency-svc-k4dsd [1.188634602s] May 30 22:21:08.081: INFO: Created: latency-svc-8dm28 May 30 22:21:08.086: INFO: Got endpoints: latency-svc-8dm28 [1.140025542s] May 30 22:21:08.116: INFO: Created: latency-svc-mwwtv May 30 22:21:08.128: INFO: Got endpoints: latency-svc-mwwtv [1.104104131s] May 30 22:21:08.162: INFO: Created: latency-svc-m5zqt May 30 22:21:08.176: INFO: Got endpoints: latency-svc-m5zqt [1.109955405s] May 30 22:21:08.228: INFO: Created: latency-svc-fkvcb May 30 22:21:08.278: INFO: Got endpoints: latency-svc-fkvcb [1.131481113s] May 30 22:21:08.375: INFO: Created: latency-svc-g4p6t May 30 22:21:08.383: INFO: Got endpoints: latency-svc-g4p6t [1.2069944s] May 30 22:21:08.409: INFO: Created: latency-svc-t9rtk May 30 22:21:08.423: INFO: Got endpoints: latency-svc-t9rtk [1.184418602s] May 30 22:21:08.531: INFO: Created: latency-svc-t8kc6 May 30 22:21:08.534: INFO: Got endpoints: latency-svc-t8kc6 [1.189740755s] May 30 22:21:08.589: INFO: Created: latency-svc-d9fb2 May 30 22:21:08.623: INFO: Got endpoints: latency-svc-d9fb2 [1.152279559s] May 30 22:21:08.684: INFO: Created: latency-svc-9ckbk May 30 22:21:08.701: INFO: Got endpoints: latency-svc-9ckbk [1.170107474s] May 30 22:21:08.732: INFO: Created: latency-svc-9v79s May 30 22:21:08.763: INFO: Got endpoints: latency-svc-9v79s [1.141628354s] May 30 22:21:08.847: INFO: Created: latency-svc-fb77k May 30 22:21:08.870: INFO: Got endpoints: latency-svc-fb77k [1.194615066s] May 30 22:21:08.906: INFO: Created: latency-svc-n2sdh May 30 22:21:08.956: INFO: Got endpoints: latency-svc-n2sdh [1.182085364s] May 30 22:21:08.978: INFO: Created: latency-svc-rmt8w May 30 22:21:09.015: INFO: Got endpoints: latency-svc-rmt8w [1.182747517s] May 30 22:21:09.045: INFO: Created: latency-svc-p9n7c May 30 22:21:09.087: INFO: Got endpoints: latency-svc-p9n7c [1.157762253s] May 30 22:21:09.111: INFO: Created: latency-svc-8pxsl May 30 22:21:09.123: INFO: Got endpoints: 
latency-svc-8pxsl [1.103479959s] May 30 22:21:09.158: INFO: Created: latency-svc-l8x52 May 30 22:21:09.189: INFO: Got endpoints: latency-svc-l8x52 [1.103085665s] May 30 22:21:09.230: INFO: Created: latency-svc-9cd55 May 30 22:21:09.244: INFO: Got endpoints: latency-svc-9cd55 [1.116354448s] May 30 22:21:09.267: INFO: Created: latency-svc-bndpr May 30 22:21:09.286: INFO: Got endpoints: latency-svc-bndpr [1.1099411s] May 30 22:21:09.315: INFO: Created: latency-svc-zhd88 May 30 22:21:09.363: INFO: Got endpoints: latency-svc-zhd88 [1.084535329s] May 30 22:21:09.411: INFO: Created: latency-svc-klxdb May 30 22:21:09.446: INFO: Got endpoints: latency-svc-klxdb [1.063153503s] May 30 22:21:09.512: INFO: Created: latency-svc-l4sjj May 30 22:21:09.528: INFO: Got endpoints: latency-svc-l4sjj [1.104177917s] May 30 22:21:09.555: INFO: Created: latency-svc-mpkc2 May 30 22:21:09.570: INFO: Got endpoints: latency-svc-mpkc2 [1.035744001s] May 30 22:21:09.597: INFO: Created: latency-svc-dp497 May 30 22:21:09.607: INFO: Got endpoints: latency-svc-dp497 [983.954912ms] May 30 22:21:09.676: INFO: Created: latency-svc-f5c5t May 30 22:21:09.683: INFO: Got endpoints: latency-svc-f5c5t [981.491922ms] May 30 22:21:09.704: INFO: Created: latency-svc-sd9n2 May 30 22:21:09.713: INFO: Got endpoints: latency-svc-sd9n2 [950.218384ms] May 30 22:21:09.735: INFO: Created: latency-svc-n6gb2 May 30 22:21:09.750: INFO: Got endpoints: latency-svc-n6gb2 [879.914146ms] May 30 22:21:09.771: INFO: Created: latency-svc-46lnr May 30 22:21:09.820: INFO: Got endpoints: latency-svc-46lnr [864.561715ms] May 30 22:21:09.830: INFO: Created: latency-svc-j5pj2 May 30 22:21:09.846: INFO: Got endpoints: latency-svc-j5pj2 [831.498054ms] May 30 22:21:09.866: INFO: Created: latency-svc-hsrt8 May 30 22:21:09.877: INFO: Got endpoints: latency-svc-hsrt8 [789.059566ms] May 30 22:21:09.980: INFO: Created: latency-svc-4cvrv May 30 22:21:10.005: INFO: Got endpoints: latency-svc-4cvrv [881.680668ms] May 30 22:21:10.046: INFO: Created: latency-svc-skffb May 30 22:21:10.063: INFO: Got endpoints: latency-svc-skffb [874.404128ms] May 30 22:21:10.123: INFO: Created: latency-svc-hjsl8 May 30 22:21:10.127: INFO: Got endpoints: latency-svc-hjsl8 [882.627367ms] May 30 22:21:10.174: INFO: Created: latency-svc-942zm May 30 22:21:10.221: INFO: Got endpoints: latency-svc-942zm [934.672213ms] May 30 22:21:10.281: INFO: Created: latency-svc-g6ss6 May 30 22:21:10.322: INFO: Got endpoints: latency-svc-g6ss6 [959.566102ms] May 30 22:21:10.434: INFO: Created: latency-svc-6bhzr May 30 22:21:10.443: INFO: Got endpoints: latency-svc-6bhzr [997.400458ms] May 30 22:21:10.467: INFO: Created: latency-svc-6d5bj May 30 22:21:10.486: INFO: Got endpoints: latency-svc-6d5bj [958.074453ms] May 30 22:21:10.509: INFO: Created: latency-svc-g84n6 May 30 22:21:10.578: INFO: Got endpoints: latency-svc-g84n6 [1.008427138s] May 30 22:21:10.603: INFO: Created: latency-svc-zctpv May 30 22:21:10.612: INFO: Got endpoints: latency-svc-zctpv [1.005204375s] May 30 22:21:10.633: INFO: Created: latency-svc-5x4qx May 30 22:21:10.644: INFO: Got endpoints: latency-svc-5x4qx [960.876708ms] May 30 22:21:10.664: INFO: Created: latency-svc-xs2sk May 30 22:21:10.716: INFO: Got endpoints: latency-svc-xs2sk [1.002442267s] May 30 22:21:10.724: INFO: Created: latency-svc-7v24r May 30 22:21:10.739: INFO: Got endpoints: latency-svc-7v24r [989.235464ms] May 30 22:21:10.771: INFO: Created: latency-svc-r7g6z May 30 22:21:10.788: INFO: Got endpoints: latency-svc-r7g6z [967.359793ms] May 30 22:21:10.860: INFO: Created: 
latency-svc-7652p May 30 22:21:10.902: INFO: Got endpoints: latency-svc-7652p [1.055819712s] May 30 22:21:10.940: INFO: Created: latency-svc-6bssp May 30 22:21:10.957: INFO: Got endpoints: latency-svc-6bssp [1.08036515s] May 30 22:21:11.010: INFO: Created: latency-svc-fjv85 May 30 22:21:11.017: INFO: Got endpoints: latency-svc-fjv85 [1.012274616s] May 30 22:21:11.041: INFO: Created: latency-svc-mltmv May 30 22:21:11.060: INFO: Got endpoints: latency-svc-mltmv [996.261283ms] May 30 22:21:11.148: INFO: Created: latency-svc-cl2ms May 30 22:21:11.155: INFO: Got endpoints: latency-svc-cl2ms [1.028179135s] May 30 22:21:11.180: INFO: Created: latency-svc-fjphr May 30 22:21:11.198: INFO: Got endpoints: latency-svc-fjphr [976.836614ms] May 30 22:21:11.226: INFO: Created: latency-svc-xxshm May 30 22:21:11.240: INFO: Got endpoints: latency-svc-xxshm [917.822768ms] May 30 22:21:11.321: INFO: Created: latency-svc-msfww May 30 22:21:11.325: INFO: Got endpoints: latency-svc-msfww [881.314956ms] May 30 22:21:11.365: INFO: Created: latency-svc-nndw7 May 30 22:21:11.389: INFO: Got endpoints: latency-svc-nndw7 [903.485813ms] May 30 22:21:11.452: INFO: Created: latency-svc-szpjv May 30 22:21:11.469: INFO: Got endpoints: latency-svc-szpjv [890.806509ms] May 30 22:21:11.494: INFO: Created: latency-svc-776xl May 30 22:21:11.512: INFO: Got endpoints: latency-svc-776xl [900.336961ms] May 30 22:21:11.620: INFO: Created: latency-svc-txn75 May 30 22:21:11.627: INFO: Got endpoints: latency-svc-txn75 [983.231995ms] May 30 22:21:11.628: INFO: Created: latency-svc-bhkrm May 30 22:21:11.661: INFO: Created: latency-svc-lz69f May 30 22:21:11.661: INFO: Got endpoints: latency-svc-bhkrm [945.219968ms] May 30 22:21:11.681: INFO: Got endpoints: latency-svc-lz69f [942.048224ms] May 30 22:21:11.720: INFO: Created: latency-svc-6794d May 30 22:21:11.782: INFO: Got endpoints: latency-svc-6794d [994.34405ms] May 30 22:21:11.810: INFO: Created: latency-svc-64nkz May 30 22:21:11.834: INFO: Got endpoints: latency-svc-64nkz [931.789836ms] May 30 22:21:11.876: INFO: Created: latency-svc-2gkfg May 30 22:21:11.949: INFO: Got endpoints: latency-svc-2gkfg [992.331154ms] May 30 22:21:11.951: INFO: Created: latency-svc-46t4w May 30 22:21:11.958: INFO: Got endpoints: latency-svc-46t4w [941.000795ms] May 30 22:21:11.978: INFO: Created: latency-svc-hfnl4 May 30 22:21:11.989: INFO: Got endpoints: latency-svc-hfnl4 [929.447032ms] May 30 22:21:12.120: INFO: Created: latency-svc-cpm9m May 30 22:21:12.140: INFO: Got endpoints: latency-svc-cpm9m [984.907631ms] May 30 22:21:12.176: INFO: Created: latency-svc-x96pf May 30 22:21:12.194: INFO: Got endpoints: latency-svc-x96pf [995.401039ms] May 30 22:21:12.217: INFO: Created: latency-svc-x2fkf May 30 22:21:12.279: INFO: Got endpoints: latency-svc-x2fkf [1.038492568s] May 30 22:21:12.284: INFO: Created: latency-svc-6nzs7 May 30 22:21:12.308: INFO: Got endpoints: latency-svc-6nzs7 [983.742614ms] May 30 22:21:12.344: INFO: Created: latency-svc-v9ldr May 30 22:21:12.363: INFO: Got endpoints: latency-svc-v9ldr [974.145013ms] May 30 22:21:12.409: INFO: Created: latency-svc-j9dbm May 30 22:21:12.435: INFO: Got endpoints: latency-svc-j9dbm [965.865395ms] May 30 22:21:12.458: INFO: Created: latency-svc-xghxt May 30 22:21:12.471: INFO: Got endpoints: latency-svc-xghxt [958.984709ms] May 30 22:21:12.555: INFO: Created: latency-svc-twx4g May 30 22:21:12.573: INFO: Got endpoints: latency-svc-twx4g [946.23289ms] May 30 22:21:12.615: INFO: Created: latency-svc-pts9q May 30 22:21:12.628: INFO: Got endpoints: 
latency-svc-pts9q [967.191872ms] May 30 22:21:12.686: INFO: Created: latency-svc-lkgj5 May 30 22:21:12.715: INFO: Got endpoints: latency-svc-lkgj5 [1.033976139s] May 30 22:21:12.716: INFO: Created: latency-svc-xbhh5 May 30 22:21:12.740: INFO: Got endpoints: latency-svc-xbhh5 [957.602866ms] May 30 22:21:12.776: INFO: Created: latency-svc-sq4js May 30 22:21:12.841: INFO: Got endpoints: latency-svc-sq4js [1.00708761s] May 30 22:21:12.844: INFO: Created: latency-svc-xbr9h May 30 22:21:12.864: INFO: Got endpoints: latency-svc-xbr9h [914.379233ms] May 30 22:21:12.901: INFO: Created: latency-svc-kf74m May 30 22:21:12.912: INFO: Got endpoints: latency-svc-kf74m [953.827065ms] May 30 22:21:12.987: INFO: Created: latency-svc-rxvhb May 30 22:21:12.996: INFO: Got endpoints: latency-svc-rxvhb [1.007201727s] May 30 22:21:13.022: INFO: Created: latency-svc-268gh May 30 22:21:13.033: INFO: Got endpoints: latency-svc-268gh [892.196993ms] May 30 22:21:13.057: INFO: Created: latency-svc-v5ccj May 30 22:21:13.081: INFO: Got endpoints: latency-svc-v5ccj [887.419767ms] May 30 22:21:13.141: INFO: Created: latency-svc-zn48d May 30 22:21:13.144: INFO: Got endpoints: latency-svc-zn48d [865.052563ms] May 30 22:21:13.175: INFO: Created: latency-svc-zqbg8 May 30 22:21:13.189: INFO: Got endpoints: latency-svc-zqbg8 [880.504998ms] May 30 22:21:13.214: INFO: Created: latency-svc-6wh7x May 30 22:21:13.226: INFO: Got endpoints: latency-svc-6wh7x [862.047447ms] May 30 22:21:13.279: INFO: Created: latency-svc-g724b May 30 22:21:13.296: INFO: Got endpoints: latency-svc-g724b [861.165384ms] May 30 22:21:13.333: INFO: Created: latency-svc-hnglk May 30 22:21:13.352: INFO: Got endpoints: latency-svc-hnglk [880.936116ms] May 30 22:21:13.376: INFO: Created: latency-svc-6ht8z May 30 22:21:13.411: INFO: Got endpoints: latency-svc-6ht8z [838.136596ms] May 30 22:21:13.461: INFO: Created: latency-svc-5xwzk May 30 22:21:13.473: INFO: Got endpoints: latency-svc-5xwzk [845.019357ms] May 30 22:21:13.501: INFO: Created: latency-svc-bptfn May 30 22:21:13.543: INFO: Got endpoints: latency-svc-bptfn [827.617063ms] May 30 22:21:13.572: INFO: Created: latency-svc-gqls5 May 30 22:21:13.589: INFO: Got endpoints: latency-svc-gqls5 [849.44667ms] May 30 22:21:13.615: INFO: Created: latency-svc-z745n May 30 22:21:13.630: INFO: Got endpoints: latency-svc-z745n [788.878061ms] May 30 22:21:13.680: INFO: Created: latency-svc-nwxcm May 30 22:21:13.685: INFO: Got endpoints: latency-svc-nwxcm [821.017347ms] May 30 22:21:13.712: INFO: Created: latency-svc-cr2gq May 30 22:21:13.734: INFO: Got endpoints: latency-svc-cr2gq [821.824686ms] May 30 22:21:13.753: INFO: Created: latency-svc-bls64 May 30 22:21:13.770: INFO: Got endpoints: latency-svc-bls64 [773.212014ms] May 30 22:21:13.812: INFO: Created: latency-svc-csxrk May 30 22:21:13.818: INFO: Got endpoints: latency-svc-csxrk [785.639932ms] May 30 22:21:13.862: INFO: Created: latency-svc-s4z6j May 30 22:21:13.886: INFO: Got endpoints: latency-svc-s4z6j [804.560976ms] May 30 22:21:13.986: INFO: Created: latency-svc-hfmdt May 30 22:21:13.989: INFO: Got endpoints: latency-svc-hfmdt [844.75416ms] May 30 22:21:14.041: INFO: Created: latency-svc-bz8q8 May 30 22:21:14.060: INFO: Got endpoints: latency-svc-bz8q8 [870.82463ms] May 30 22:21:14.129: INFO: Created: latency-svc-k5qj6 May 30 22:21:14.145: INFO: Got endpoints: latency-svc-k5qj6 [919.184646ms] May 30 22:21:14.192: INFO: Created: latency-svc-4ggbg May 30 22:21:14.220: INFO: Got endpoints: latency-svc-4ggbg [923.945385ms] May 30 22:21:14.286: INFO: Created: 
latency-svc-sfkl6 May 30 22:21:14.289: INFO: Got endpoints: latency-svc-sfkl6 [936.973664ms] May 30 22:21:14.327: INFO: Created: latency-svc-wcm89 May 30 22:21:14.343: INFO: Got endpoints: latency-svc-wcm89 [931.655634ms] May 30 22:21:14.428: INFO: Created: latency-svc-khc8z May 30 22:21:14.449: INFO: Got endpoints: latency-svc-khc8z [975.50732ms] May 30 22:21:14.503: INFO: Created: latency-svc-8g7dq May 30 22:21:14.560: INFO: Got endpoints: latency-svc-8g7dq [1.017442612s] May 30 22:21:14.588: INFO: Created: latency-svc-8vfhd May 30 22:21:14.608: INFO: Got endpoints: latency-svc-8vfhd [1.018710562s] May 30 22:21:14.640: INFO: Created: latency-svc-gjjnx May 30 22:21:14.657: INFO: Got endpoints: latency-svc-gjjnx [1.026650296s] May 30 22:21:14.705: INFO: Created: latency-svc-ntfb6 May 30 22:21:14.707: INFO: Got endpoints: latency-svc-ntfb6 [1.021980138s] May 30 22:21:14.738: INFO: Created: latency-svc-7xpgr May 30 22:21:14.772: INFO: Got endpoints: latency-svc-7xpgr [1.038256533s] May 30 22:21:14.847: INFO: Created: latency-svc-qxrss May 30 22:21:14.850: INFO: Got endpoints: latency-svc-qxrss [1.080220979s] May 30 22:21:14.947: INFO: Created: latency-svc-5bzgp May 30 22:21:15.003: INFO: Got endpoints: latency-svc-5bzgp [1.18464273s] May 30 22:21:15.043: INFO: Created: latency-svc-bk4pv May 30 22:21:15.061: INFO: Got endpoints: latency-svc-bk4pv [1.175489966s] May 30 22:21:15.091: INFO: Created: latency-svc-px762 May 30 22:21:15.135: INFO: Got endpoints: latency-svc-px762 [1.146383678s] May 30 22:21:15.163: INFO: Created: latency-svc-6xfvv May 30 22:21:15.476: INFO: Got endpoints: latency-svc-6xfvv [1.416266731s] May 30 22:21:15.633: INFO: Created: latency-svc-z2cqz May 30 22:21:15.637: INFO: Got endpoints: latency-svc-z2cqz [1.49208734s] May 30 22:21:15.673: INFO: Created: latency-svc-7bdfp May 30 22:21:15.692: INFO: Got endpoints: latency-svc-7bdfp [1.471107282s] May 30 22:21:15.721: INFO: Created: latency-svc-hcbk7 May 30 22:21:15.788: INFO: Got endpoints: latency-svc-hcbk7 [1.498564888s] May 30 22:21:15.818: INFO: Created: latency-svc-vd4cc May 30 22:21:15.830: INFO: Got endpoints: latency-svc-vd4cc [1.487241657s] May 30 22:21:16.094: INFO: Created: latency-svc-njrjc May 30 22:21:16.107: INFO: Got endpoints: latency-svc-njrjc [1.658162933s] May 30 22:21:16.154: INFO: Created: latency-svc-8v76n May 30 22:21:16.155: INFO: Got endpoints: latency-svc-8v76n [1.59472971s] May 30 22:21:16.237: INFO: Created: latency-svc-v7jg4 May 30 22:21:16.246: INFO: Got endpoints: latency-svc-v7jg4 [1.637564413s] May 30 22:21:16.274: INFO: Created: latency-svc-7sz6q May 30 22:21:16.288: INFO: Got endpoints: latency-svc-7sz6q [1.630541517s] May 30 22:21:16.326: INFO: Created: latency-svc-84cqn May 30 22:21:16.417: INFO: Got endpoints: latency-svc-84cqn [1.710533912s] May 30 22:21:16.465: INFO: Created: latency-svc-phjbc May 30 22:21:16.495: INFO: Got endpoints: latency-svc-phjbc [1.722933807s] May 30 22:21:16.549: INFO: Created: latency-svc-g5cbg May 30 22:21:16.567: INFO: Got endpoints: latency-svc-g5cbg [1.717333284s] May 30 22:21:16.623: INFO: Created: latency-svc-tdfw9 May 30 22:21:16.692: INFO: Got endpoints: latency-svc-tdfw9 [1.689033888s] May 30 22:21:16.692: INFO: Latencies: [109.054068ms 150.187428ms 238.360714ms 306.745266ms 373.196967ms 430.484659ms 481.465574ms 553.547855ms 638.427948ms 771.91774ms 773.212014ms 785.639932ms 788.878061ms 789.059566ms 804.560976ms 812.144793ms 821.017347ms 821.824686ms 827.617063ms 831.498054ms 838.136596ms 844.75416ms 845.019357ms 849.44667ms 854.592059ms 
861.165384ms 862.047447ms 864.561715ms 865.052563ms 870.82463ms 871.949931ms 874.404128ms 879.914146ms 880.504998ms 880.936116ms 881.314956ms 881.680668ms 882.627367ms 887.419767ms 890.806509ms 892.196993ms 900.336961ms 903.485813ms 914.379233ms 917.822768ms 919.184646ms 923.558827ms 923.945385ms 927.974201ms 929.447032ms 931.655634ms 931.789836ms 934.672213ms 936.973664ms 941.000795ms 942.048224ms 945.219968ms 946.23289ms 950.218384ms 953.486682ms 953.827065ms 957.602866ms 958.074453ms 958.984709ms 959.566102ms 960.876708ms 965.865395ms 966.093426ms 967.191872ms 967.359793ms 971.900862ms 972.595442ms 974.145013ms 974.752326ms 975.50732ms 976.836614ms 977.293799ms 979.665128ms 981.491922ms 983.088239ms 983.231995ms 983.742614ms 983.954912ms 984.907631ms 989.235464ms 990.303902ms 992.331154ms 994.34405ms 995.401039ms 996.261283ms 997.400458ms 1.00230907s 1.002442267s 1.005204375s 1.00708761s 1.007201727s 1.008427138s 1.009029499s 1.012274616s 1.013389639s 1.017442612s 1.018710562s 1.021980138s 1.026650296s 1.028179135s 1.033976139s 1.035744001s 1.038256533s 1.038492568s 1.047425774s 1.054038354s 1.055819712s 1.058882424s 1.063153503s 1.065694742s 1.077031059s 1.080220979s 1.08036515s 1.084535329s 1.10073011s 1.102813621s 1.103085665s 1.103464446s 1.103479959s 1.104104131s 1.104177917s 1.10452163s 1.105432791s 1.109885694s 1.1099411s 1.109955405s 1.116354448s 1.120651895s 1.131481113s 1.133721132s 1.136900917s 1.137194113s 1.140025542s 1.141628354s 1.146171162s 1.146383678s 1.146660047s 1.151010908s 1.151369033s 1.152279559s 1.155285102s 1.156561932s 1.156569723s 1.157158776s 1.157237564s 1.157541012s 1.157561894s 1.157762253s 1.159223697s 1.165347735s 1.169122772s 1.170107474s 1.173671971s 1.174984538s 1.175021993s 1.175050635s 1.175489966s 1.178927535s 1.182085364s 1.182747517s 1.183819785s 1.184418602s 1.18464273s 1.184749598s 1.186369819s 1.188634602s 1.189740755s 1.193240436s 1.194615066s 1.205284759s 1.2069944s 1.210645713s 1.211925288s 1.216777461s 1.217692066s 1.221260343s 1.22248715s 1.226783775s 1.237260667s 1.240758219s 1.241488235s 1.242014924s 1.416266731s 1.471107282s 1.487241657s 1.49208734s 1.498564888s 1.59472971s 1.630541517s 1.637564413s 1.658162933s 1.689033888s 1.710533912s 1.717333284s 1.722933807s] May 30 22:21:16.693: INFO: 50 %ile: 1.017442612s May 30 22:21:16.693: INFO: 90 %ile: 1.221260343s May 30 22:21:16.693: INFO: 99 %ile: 1.717333284s May 30 22:21:16.693: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:21:16.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6224" for this suite. 
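Each "Created"/"Got endpoints" pair above is one latency sample: the time from creating a Service until the endpoints controller publishes a ready address for it, with the 200 samples summarized by the percentiles at the end. A hand-rolled version of a single measurement (the service name is illustrative):

    kubectl -n svc-latency-6224 expose rc svc-latency-rc --name=latency-svc-demo --port=80
    start=$(date +%s%N)
    until [ -n "$(kubectl -n svc-latency-6224 get endpoints latency-svc-demo \
            -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
      sleep 0.05
    done
    echo "endpoints ready after $(( ($(date +%s%N) - start) / 1000000 )) ms"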
• [SLOW TEST:17.883 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":254,"skipped":4123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:21:16.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-77fc9206-9cc7-4bb1-896e-b2b1a8407c80 STEP: Creating a pod to test consume secrets May 30 22:21:16.842: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a25c3189-6bb0-426a-a9b7-03999a2d7571" in namespace "projected-6060" to be "success or failure" May 30 22:21:16.844: INFO: Pod "pod-projected-secrets-a25c3189-6bb0-426a-a9b7-03999a2d7571": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277025ms May 30 22:21:18.850: INFO: Pod "pod-projected-secrets-a25c3189-6bb0-426a-a9b7-03999a2d7571": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008795905s May 30 22:21:20.884: INFO: Pod "pod-projected-secrets-a25c3189-6bb0-426a-a9b7-03999a2d7571": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042071308s STEP: Saw pod success May 30 22:21:20.884: INFO: Pod "pod-projected-secrets-a25c3189-6bb0-426a-a9b7-03999a2d7571" satisfied condition "success or failure" May 30 22:21:20.886: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-a25c3189-6bb0-426a-a9b7-03999a2d7571 container projected-secret-volume-test: STEP: delete the pod May 30 22:21:20.948: INFO: Waiting for pod pod-projected-secrets-a25c3189-6bb0-426a-a9b7-03999a2d7571 to disappear May 30 22:21:20.966: INFO: Pod pod-projected-secrets-a25c3189-6bb0-426a-a9b7-03999a2d7571 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:21:20.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6060" for this suite. 
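The [LinuxOnly] variant above layers three checks onto the same projected-secret mount: the pod runs as a non-root user, the projected volume carries a defaultMode, and an fsGroup governs group ownership of the mounted files. A minimal sketch (UID, GID, and mode are illustrative):

    kubectl -n projected-6060 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-perms-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000    # non-root
        fsGroup: 1001      # applied to files on the projected volume
      containers:
      - name: projected-secret-volume-test
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "ls -ln /etc/projected-secret-volume"]
        volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
      volumes:
      - name: projected-secret-volume
        projected:
          defaultMode: 0440    # -r--r-----
          sources:
          - secret:
              name: projected-secret-demo    # assumes the secret exists in this namespace
    EOF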
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4156,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:21:20.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 30 22:21:21.138: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9248 /api/v1/namespaces/watch-9248/configmaps/e2e-watch-test-resource-version e389e5ce-820d-4ba8-8ad1-a0542d0f0a59 20448214 0 2020-05-30 22:21:21 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 30 22:21:21.138: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9248 /api/v1/namespaces/watch-9248/configmaps/e2e-watch-test-resource-version e389e5ce-820d-4ba8-8ad1-a0542d0f0a59 20448215 0 2020-05-30 22:21:21 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:21:21.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9248" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":256,"skipped":4160,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:21:21.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:21:21.183: INFO: Creating deployment "webserver-deployment" May 30 22:21:21.187: INFO: Waiting for observed generation 1 May 30 22:21:23.311: INFO: Waiting for all required pods to come up May 30 22:21:23.327: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 30 22:21:33.482: INFO: Waiting for deployment "webserver-deployment" to complete May 30 22:21:33.500: INFO: Updating deployment "webserver-deployment" with a non-existent image May 30 22:21:33.537: INFO: Updating deployment webserver-deployment May 30 22:21:33.537: INFO: Waiting for observed generation 2 May 30 22:21:35.807: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 30 22:21:35.998: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 30 22:21:36.274: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 30 22:21:36.429: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 30 22:21:36.429: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 30 22:21:36.483: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 30 22:21:36.753: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 30 22:21:36.753: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 30 22:21:36.806: INFO: Updating deployment webserver-deployment May 30 22:21:36.806: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 30 22:21:37.087: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 30 22:21:39.962: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 30 22:21:40.736: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-4925 /apis/apps/v1/namespaces/deployment-4925/deployments/webserver-deployment 62373192-803b-4195-b69f-f86093ec07a9 20448905 3 2020-05-30 22:21:21 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001063c98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-30 22:21:36 +0000 UTC,LastTransitionTime:2020-05-30 22:21:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-30 22:21:37 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 30 22:21:40.943: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-4925 /apis/apps/v1/namespaces/deployment-4925/replicasets/webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 20448892 3 2020-05-30 22:21:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 62373192-803b-4195-b69f-f86093ec07a9 0xc002ce61a7 0xc002ce61a8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ce6218 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 22:21:40.943: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 30 22:21:40.944: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-4925 
/apis/apps/v1/namespaces/deployment-4925/replicasets/webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 20448904 3 2020-05-30 22:21:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 62373192-803b-4195-b69f-f86093ec07a9 0xc002ce60e7 0xc002ce60e8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ce6148 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 30 22:21:41.339: INFO: Pod "webserver-deployment-595b5b9587-28t58" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-28t58 webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-28t58 acb37679-d407-4196-bfe0-b30dfd918f93 20448909 0 2020-05-30 22:21:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc002dd7b77 0xc002dd7b78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-30 22:21:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.339: INFO: Pod "webserver-deployment-595b5b9587-2fchd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2fchd webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-2fchd 2ec4c567-4c13-47c3-b972-94ca8f3935d0 20448907 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc002dd7cd7 0xc002dd7cd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:n
il,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 22:21:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.340: INFO: Pod "webserver-deployment-595b5b9587-55bgr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-55bgr webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-55bgr 84b19696-caa3-4efd-9060-db2362e95b6c 20448948 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc002dd7e57 0xc002dd7e58}] [] 
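Note that every pod spec in these dumps leaves Resources{Limits,Requests} empty, which is exactly why each status ends with QOSClass:BestEffort. A simplified sketch of the classification rule (not the kubelet's actual qos package, just the rule it implements):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// qosClass sketches the standard QoS rule: no requests or limits anywhere
// means BestEffort; requests equal to limits for cpu and memory in every
// container means Guaranteed; anything in between is Burstable.
func qosClass(pod *corev1.Pod) corev1.PodQOSClass {
	anySet := false
	guaranteed := true
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			anySet = true
		}
		for _, r := range []corev1.ResourceName{corev1.ResourceCPU, corev1.ResourceMemory} {
			req, hasReq := c.Resources.Requests[r]
			lim, hasLim := c.Resources.Limits[r]
			if !hasReq || !hasLim || req.Cmp(lim) != 0 {
				guaranteed = false
			}
		}
	}
	switch {
	case !anySet:
		return corev1.PodQOSBestEffort
	case guaranteed:
		return corev1.PodQOSGuaranteed
	default:
		return corev1.PodQOSBurstable
	}
}

func main() {
	// The httpd container above sets no resources at all.
	pod := &corev1.Pod{Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "httpd"}}}}
	fmt.Println(qosClass(pod)) // BestEffort
}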
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-30 22:21:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.340: INFO: Pod "webserver-deployment-595b5b9587-56q2f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-56q2f webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-56q2f e91486e0-9e17-4d1a-8c5b-95e67eef618b 20448966 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397a007 0xc00397a008}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-30 22:21:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.340: INFO: Pod "webserver-deployment-595b5b9587-8d22v" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8d22v webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-8d22v a8e93eeb-e14e-4288-a57e-b36e8758dc79 20448900 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397a167 0xc00397a168}] [] 
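Each pod also mounts the default service-account token (default-token-4t7jb) read-only at /var/run/secrets/kubernetes.io/serviceaccount; the DefaultMode:*420 in those volume dumps is the decimal rendering of the familiar octal file mode 0644, as this small check shows:

package main

import (
	"fmt"
	"os"
)

func main() {
	fmt.Printf("%o\n", 420)       // 644
	fmt.Println(os.FileMode(420)) // -rw-r--r--
}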
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 22:21:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.341: INFO: Pod "webserver-deployment-595b5b9587-99hrd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-99hrd webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-99hrd cff309b7-b25d-4f6b-844f-68c86c65c9bc 20448484 0 2020-05-30 22:21:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397a2c7 0xc00397a2c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,E
nableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.145,StartTime:2020-05-30 22:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 22:21:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9601bec7d920c182665ea7802b4f2291c31c0d22ee618e215631df6a36c60e83,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.145,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.341: INFO: Pod "webserver-deployment-595b5b9587-clnvd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-clnvd webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-clnvd e930d122-242c-4f36-a868-c54eeffaabf8 20448924 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397a447 0xc00397a448}] [] 
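The "is available" / "is not available" verdicts in this stream follow directly from each pod's Ready condition: the Pending pods carry Ready=False with Reason:ContainersNotReady while httpd is still ContainerCreating, and the Running ones carry Ready=True. A self-contained sketch of that rule (MinReadySeconds is 0 for this Deployment, so readiness alone decides; this mirrors the upstream availability helper rather than reusing it):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isAvailable: a pod counts as available once its Ready condition is True
// and has stayed True for at least minReadySeconds.
func isAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type != corev1.PodReady {
			continue
		}
		if cond.Status != corev1.ConditionTrue {
			return false // e.g. Reason:ContainersNotReady during ContainerCreating
		}
		wait := time.Duration(minReadySeconds) * time.Second
		return minReadySeconds == 0 || cond.LastTransitionTime.Add(wait).Before(now)
	}
	return false // no Ready condition reported yet
}

func main() {
	creating := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{{
		Type: corev1.PodReady, Status: corev1.ConditionFalse, LastTransitionTime: metav1.Now(),
	}}}}
	fmt.Println(isAvailable(creating, 0, time.Now())) // false, like the pods above
}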
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-30 22:21:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.341: INFO: Pod "webserver-deployment-595b5b9587-fpgzj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fpgzj webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-fpgzj 0cf3e99d-09c3-4e68-a0f8-386fb3633ca1 20448623 0 2020-05-30 22:21:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397a5a7 0xc00397a5a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,E
nableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.147,StartTime:2020-05-30 22:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 22:21:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3c266d112f87340dd11a4f4352043631cde69cd91f5355e9cdc9090c11c5a2c6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.147,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.342: INFO: Pod "webserver-deployment-595b5b9587-gqfdj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gqfdj webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-gqfdj 1b065e46-6fc0-48a7-a6d6-5458a56cc9ad 20448885 0 2020-05-30 22:21:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397a727 0xc00397a728}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-30 22:21:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.342: INFO: Pod "webserver-deployment-595b5b9587-hzfbv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hzfbv webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-hzfbv bdaa6bcb-29fe-408e-b83b-4bef041f7dc1 20448936 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397a897 0xc00397a898}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:n
il,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 22:21:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.342: INFO: Pod "webserver-deployment-595b5b9587-jw52n" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jw52n webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-jw52n dc9155a7-b6cf-42ec-9e68-55f55516fcaf 20448941 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397a9f7 0xc00397a9f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 22:21:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.343: INFO: Pod "webserver-deployment-595b5b9587-kll74" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kll74 webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-kll74 431594e0-b646-47eb-b70c-d7a54d0be7a7 20448500 0 2020-05-30 22:21:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397ab57 0xc00397ab58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.93,StartTime:2020-05-30 22:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 22:21:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a2d7c837129add17b5bfdb956acd17a509c32fe9040cf513f25f4564da6cee3d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.343: INFO: Pod "webserver-deployment-595b5b9587-rbl4d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rbl4d webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-rbl4d 3d3409c8-3231-4cb2-91d7-b6f945347864 20448925 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397acd7 0xc00397acd8}] [] 
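Pod names in this list embed the pod-template-hash of the owning ReplicaSet (595b5b9587 for the old httpd:2.4.38-alpine template, c7997dcc8 for the new webserver:404 one), and the same hash appears in each pod's labels; that label is what partitions the 33 pods between the two ReplicaSets. A small sketch of grouping pods the same way (the second sample name below is hypothetical, modeled on the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// groupByTemplateHash buckets pod names by the pod-template-hash label the
// Deployment controller stamps on every ReplicaSet and its pods.
func groupByTemplateHash(pods []corev1.Pod) map[string][]string {
	groups := map[string][]string{}
	for _, p := range pods {
		h := p.Labels["pod-template-hash"]
		groups[h] = append(groups[h], p.Name)
	}
	return groups
}

func main() {
	pods := []corev1.Pod{
		{ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment-595b5b9587-rbl4d",
			Labels: map[string]string{"pod-template-hash": "595b5b9587"}}},
		{ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment-c7997dcc8-example", // hypothetical name
			Labels: map[string]string{"pod-template-hash": "c7997dcc8"}}},
	}
	fmt.Println(groupByTemplateHash(pods)) // two buckets keyed by template hash (map order varies)
}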
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 22:21:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.343: INFO: Pod "webserver-deployment-595b5b9587-rh45d" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rh45d webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-rh45d 8a6ee85c-23db-4384-b702-eb087304f3b2 20448568 0 2020-05-30 22:21:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397ae37 0xc00397ae38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.96,StartTime:2020-05-30 22:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 22:21:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7735924ce357fdf73ef1ca4cd5c5177eaf6ea90989443aac7bd3a6361aabbc8a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.343: INFO: Pod "webserver-deployment-595b5b9587-t9c5v" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t9c5v webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-t9c5v 82b60090-687c-4f78-a06e-fef7bdfb7375 20448919 0 2020-05-30 22:21:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397afb7 0xc00397afb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-30 22:21:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.343: INFO: Pod "webserver-deployment-595b5b9587-tkgrc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tkgrc webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-tkgrc 08b91725-0e79-4b2b-8c6d-dc65ebf2fc95 20448979 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397b117 0xc00397b118}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-30 22:21:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.344: INFO: Pod "webserver-deployment-595b5b9587-txslh" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-txslh webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-txslh 4d8782ea-0ce6-43c8-94d7-8f85c4871776 20448512 0 2020-05-30 22:21:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397b277 0xc00397b278}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.143,StartTime:2020-05-30 22:21:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 22:21:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a97d7915b3720554c433eda70146bfa8eefc459b88a4cbcfac63096492d26f50,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.344: INFO: Pod "webserver-deployment-595b5b9587-w9jt8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w9jt8 webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-w9jt8 6b9166fb-52a7-493d-ba5f-47143cea270d 20448619 0 2020-05-30 22:21:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397b407 0xc00397b408}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.146,StartTime:2020-05-30 22:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 22:21:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://40991028dbb837abcf6bfc1caf350b0efc8e60e0164cdadc06d734c7b23d96a8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.344: INFO: Pod "webserver-deployment-595b5b9587-wj4jr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wj4jr webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-wj4jr 85dba400-7d5f-418b-9630-3523eac7045f 20448550 0 2020-05-30 22:21:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397b587 0xc00397b588}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.144,StartTime:2020-05-30 22:21:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 22:21:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ec1d4f8dae3023ac95359c4014703354dec0aed6f3c186239f30f032e771b0f5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.144,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.344: INFO: Pod "webserver-deployment-595b5b9587-z5qwv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z5qwv webserver-deployment-595b5b9587- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-595b5b9587-z5qwv c78f2350-d0f9-415d-aa19-516fb86bd4a0 20448548 0 2020-05-30 22:21:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5d6948b9-1044-422a-9415-4e4219c5cd65 0xc00397b707 0xc00397b708}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.94,StartTime:2020-05-30 22:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 22:21:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0913a9c3f566c58da0b8fe18d02b0d4348142fbdc975ef1cbb9f6a614d043e82,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.344: INFO: Pod "webserver-deployment-c7997dcc8-22f6n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-22f6n webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-22f6n 7851d490-b100-4e4a-925d-b12e0d4f22b6 20448781 0 2020-05-30 22:21:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc00397b887 0xc00397b888}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 22:21:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.345: INFO: Pod "webserver-deployment-c7997dcc8-8gs46" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8gs46 webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-8gs46 9db4ace3-d466-480f-a838-2195012f829f 20448937 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc00397ba07 0xc00397ba08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-30 22:21:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.345: INFO: Pod "webserver-deployment-c7997dcc8-dgt4f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dgt4f webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-dgt4f 230887a2-ad6b-481e-9531-67f908b15d99 20448951 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc00397bb87 0xc00397bb88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 22:21:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.345: INFO: Pod "webserver-deployment-c7997dcc8-grclb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-grclb webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-grclb 73b0aed2-69a2-4c0b-88b6-626ce022a3eb 20448974 0 2020-05-30 22:21:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc00397bd07 0xc00397bd08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.149,StartTime:2020-05-30 22:21:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.345: INFO: Pod "webserver-deployment-c7997dcc8-lf5zq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lf5zq webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-lf5zq d68a48ac-976f-42a1-b7a0-b9d9c2a9ddd4 20448912 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc00397beb7 0xc00397beb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 22:21:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.345: INFO: Pod "webserver-deployment-c7997dcc8-m5jw2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m5jw2 webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-m5jw2 b9981236-4d71-48be-bc99-1a28ce403ae5 20448790 0 2020-05-30 22:21:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc002c50057 0xc002c50058}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-30 22:21:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.346: INFO: Pod "webserver-deployment-c7997dcc8-n5jbl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n5jbl webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-n5jbl 443996a7-7253-4d8b-881e-c4bf10717065 20448969 0 2020-05-30 22:21:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc002c501e7 0xc002c501e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.148,StartTime:2020-05-30 22:21:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.148,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.346: INFO: Pod "webserver-deployment-c7997dcc8-pcjf7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pcjf7 webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-pcjf7 5091af9b-2ad7-4ca0-8965-aec5ba8a21a6 20448958 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc002c50397 0xc002c50398}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toler
ation{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-30 22:21:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.346: INFO: Pod "webserver-deployment-c7997dcc8-qn8bv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qn8bv webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-qn8bv 69c7f707-97b8-4edb-adaa-5717b719d461 20448880 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc002c505c7 0xc002c505c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.346: INFO: Pod "webserver-deployment-c7997dcc8-rgk8r" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rgk8r webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-rgk8r 80d96bfd-a4b3-4fe9-ba43-0cb251253798 20448961 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc002c507f7 0xc002c507f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 22:21:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.346: INFO: Pod "webserver-deployment-c7997dcc8-rphn8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rphn8 webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-rphn8 9903446d-44da-4104-8a5d-936283680011 20448918 0 2020-05-30 22:21:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc002c50997 0xc002c50998}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readi
nessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-30 22:21:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.346: INFO: Pod "webserver-deployment-c7997dcc8-wfrpw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wfrpw webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-wfrpw 7c04d20d-d0d2-48a6-9079-a9efd6176c9f 20448901 0 2020-05-30 22:21:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc002c50b17 0xc002c50b18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-30 22:21:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 22:21:41.347: INFO: Pod "webserver-deployment-c7997dcc8-xfhkp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xfhkp webserver-deployment-c7997dcc8- deployment-4925 /api/v1/namespaces/deployment-4925/pods/webserver-deployment-c7997dcc8-xfhkp 6ff3931b-7bd6-4ad1-a69c-4b6d4cec032b 20448915 0 2020-05-30 22:21:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4af9e4c0-d2a8-4450-b290-f69ee6f2626d 0xc002c50c97 0xc002c50c98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t7jb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t7jb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t7jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 22:21:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.98,StartTime:2020-05-30 22:21:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:21:41.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4925" for this suite. 
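The ErrImagePull messages above are expected: the proportional-scaling test deliberately rolls out the unpullable image webserver:404 so that the new ReplicaSet can never become fully available. A minimal sketch of reproducing the same failure mode by hand, assuming kubectl access to a similar cluster (the deployment name "webserver" and the commands below are illustrative, not part of the test):

  # Create a deployment whose image tag does not exist; pods stay Pending with ErrImagePull
  kubectl create deployment webserver --image=webserver:404
  kubectl scale deployment webserver --replicas=3
  # The pod events show the same "pull access denied" message logged above
  kubectl get pods -l app=webserver
  kubectl describe pod -l app=webserver | grep -A2 ErrImagePull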
• [SLOW TEST:20.821 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":257,"skipped":4171,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:21:41.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 22:21:45.670: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 22:21:49.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474106, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 22:21:51.637: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474106, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 22:21:53.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474106, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 22:21:55.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474106, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 22:21:57.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474106, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 22:21:59.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does 
not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474106, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474105, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 22:22:02.933: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:22:04.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7681" for this suite. STEP: Destroying namespace "webhook-7681-markers" for this suite. 
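The point of this webhook test is that admission webhooks are never invoked for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, so a misbehaving webhook can always be removed. A hedged sketch of the recovery path this guarantees, assuming a stuck configuration named sample-webhook.example.com (an illustrative name, not from this run):

  # Webhook configuration objects stay deletable even if every webhook backend is down
  kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations
  kubectl delete validatingwebhookconfiguration sample-webhook.example.com
  kubectl delete mutatingwebhookconfiguration sample-webhook.example.com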
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:25.748 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":258,"skipped":4188,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:22:07.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 30 22:22:09.963: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 30 22:22:12.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474129, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474129, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726474129, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 22:22:15.439: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:22:15.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be 
converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:22:16.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2302" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:9.111 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":259,"skipped":4192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:22:16.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-msbk STEP: Creating a pod to test atomic-volume-subpath May 30 22:22:17.478: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-msbk" in namespace "subpath-4805" to be "success or failure" May 30 22:22:17.534: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Pending", Reason="", readiness=false. Elapsed: 56.739257ms May 30 22:22:19.627: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149629866s May 30 22:22:21.645: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Running", Reason="", readiness=true. Elapsed: 4.167466306s May 30 22:22:23.699: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Running", Reason="", readiness=true. Elapsed: 6.221499378s May 30 22:22:25.718: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Running", Reason="", readiness=true. Elapsed: 8.240566991s May 30 22:22:27.736: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Running", Reason="", readiness=true. Elapsed: 10.258121619s May 30 22:22:29.740: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Running", Reason="", readiness=true. Elapsed: 12.262159336s May 30 22:22:31.754: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.276198259s May 30 22:22:33.757: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Running", Reason="", readiness=true. Elapsed: 16.279474528s May 30 22:22:35.861: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Running", Reason="", readiness=true. Elapsed: 18.383348644s May 30 22:22:37.865: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Running", Reason="", readiness=true. Elapsed: 20.387848872s May 30 22:22:39.870: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Running", Reason="", readiness=true. Elapsed: 22.392749287s May 30 22:22:41.875: INFO: Pod "pod-subpath-test-configmap-msbk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.397193879s STEP: Saw pod success May 30 22:22:41.875: INFO: Pod "pod-subpath-test-configmap-msbk" satisfied condition "success or failure" May 30 22:22:41.878: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-msbk container test-container-subpath-configmap-msbk: STEP: delete the pod May 30 22:22:41.955: INFO: Waiting for pod pod-subpath-test-configmap-msbk to disappear May 30 22:22:41.985: INFO: Pod pod-subpath-test-configmap-msbk no longer exists STEP: Deleting pod pod-subpath-test-configmap-msbk May 30 22:22:41.985: INFO: Deleting pod "pod-subpath-test-configmap-msbk" in namespace "subpath-4805" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:22:41.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4805" for this suite. • [SLOW TEST:25.177 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":260,"skipped":4242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:22:42.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:22:42.115: INFO: Creating ReplicaSet my-hostname-basic-6eb52114-de1c-4089-9cf8-2516ba205fd1 May 30 22:22:42.128: INFO: Pod name my-hostname-basic-6eb52114-de1c-4089-9cf8-2516ba205fd1: Found 0 pods out of 1 May 30 22:22:47.155: INFO: Pod name my-hostname-basic-6eb52114-de1c-4089-9cf8-2516ba205fd1: Found 1 pods out of 1 May 30 22:22:47.155: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-6eb52114-de1c-4089-9cf8-2516ba205fd1" is 
running May 30 22:22:47.177: INFO: Pod "my-hostname-basic-6eb52114-de1c-4089-9cf8-2516ba205fd1-nvcbz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 22:22:42 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 22:22:45 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 22:22:45 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 22:22:42 +0000 UTC Reason: Message:}]) May 30 22:22:47.177: INFO: Trying to dial the pod May 30 22:22:52.188: INFO: Controller my-hostname-basic-6eb52114-de1c-4089-9cf8-2516ba205fd1: Got expected result from replica 1 [my-hostname-basic-6eb52114-de1c-4089-9cf8-2516ba205fd1-nvcbz]: "my-hostname-basic-6eb52114-de1c-4089-9cf8-2516ba205fd1-nvcbz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:22:52.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1770" for this suite. • [SLOW TEST:10.190 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":261,"skipped":4274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:22:52.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:22:52.453: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.270894ms)
May 30 22:22:52.456: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.375503ms)
May 30 22:22:52.460: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.384045ms)
May 30 22:22:52.463: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.280836ms)
May 30 22:22:52.466: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.461272ms)
May 30 22:22:52.470: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.256295ms)
May 30 22:22:52.473: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.281022ms)
May 30 22:22:52.476: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.042822ms)
May 30 22:22:52.480: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.439376ms)
May 30 22:22:52.483: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.393348ms)
May 30 22:22:52.487: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.674853ms)
May 30 22:22:52.490: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.172702ms)
May 30 22:22:52.493: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.186996ms)
May 30 22:22:52.496: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.076371ms)
May 30 22:22:52.499: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.13433ms)
May 30 22:22:52.503: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.043906ms)
May 30 22:22:52.506: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.455296ms)
May 30 22:22:52.510: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.476848ms)
May 30 22:22:52.513: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.192274ms)
May 30 22:22:52.516: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 3.044748ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:22:52.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7450" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":262,"skipped":4313,"failed":0} S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:22:52.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-7fac67a0-8873-4c44-adb3-f119e1586871 in namespace container-probe-1714 May 30 22:22:56.735: INFO: Started pod liveness-7fac67a0-8873-4c44-adb3-f119e1586871 in namespace container-probe-1714 STEP: checking the pod's current state and verifying that restartCount is present May 30 22:22:56.738: INFO: Initial restart count of pod liveness-7fac67a0-8873-4c44-adb3-f119e1586871 is 0 May 30 22:23:18.818: INFO: Restart count of pod container-probe-1714/liveness-7fac67a0-8873-4c44-adb3-f119e1586871 is now 1 (22.079559207s elapsed) May 30 22:23:38.866: INFO: Restart count of pod container-probe-1714/liveness-7fac67a0-8873-4c44-adb3-f119e1586871 is now 2 (42.127596799s elapsed) May 30 22:23:58.907: INFO: Restart count of pod container-probe-1714/liveness-7fac67a0-8873-4c44-adb3-f119e1586871 is now 3 (1m2.168537391s elapsed) May 30 22:24:18.947: INFO: Restart count of pod container-probe-1714/liveness-7fac67a0-8873-4c44-adb3-f119e1586871 is now 4 (1m22.208877403s elapsed) May 30 22:25:33.167: INFO: Restart count of pod container-probe-1714/liveness-7fac67a0-8873-4c44-adb3-f119e1586871 is now 5 (2m36.428281495s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:25:33.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1714" for this suite. 
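Editor's note: the probe test above drives a monotonically increasing restartCount by letting a liveness probe fail repeatedly. A minimal sketch for reproducing the behaviour by hand, assuming a busybox image and illustrative names (this is not the exact manifest the e2e framework generates):

    # Pod whose liveness probe starts failing after ~30s; the kubelet then
    # restarts the container each time the failure threshold is hit, so
    # status.containerStatuses[0].restartCount climbs monotonically.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo
    spec:
      containers:
      - name: liveness
        image: busybox
        args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/healthy"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    # Watch the restart count increase:
    kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'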
• [SLOW TEST:160.687 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4314,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:25:33.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-3499/configmap-test-3127a669-59ae-4e2c-9702-cfe39af0b765 STEP: Creating a pod to test consume configMaps May 30 22:25:33.294: INFO: Waiting up to 5m0s for pod "pod-configmaps-de5dbead-4649-44ac-93d2-7abaa4392158" in namespace "configmap-3499" to be "success or failure" May 30 22:25:33.300: INFO: Pod "pod-configmaps-de5dbead-4649-44ac-93d2-7abaa4392158": Phase="Pending", Reason="", readiness=false. Elapsed: 5.16503ms May 30 22:25:35.304: INFO: Pod "pod-configmaps-de5dbead-4649-44ac-93d2-7abaa4392158": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009489805s May 30 22:25:37.309: INFO: Pod "pod-configmaps-de5dbead-4649-44ac-93d2-7abaa4392158": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014256467s STEP: Saw pod success May 30 22:25:37.309: INFO: Pod "pod-configmaps-de5dbead-4649-44ac-93d2-7abaa4392158" satisfied condition "success or failure" May 30 22:25:37.312: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-de5dbead-4649-44ac-93d2-7abaa4392158 container env-test: STEP: delete the pod May 30 22:25:37.349: INFO: Waiting for pod pod-configmaps-de5dbead-4649-44ac-93d2-7abaa4392158 to disappear May 30 22:25:37.372: INFO: Pod pod-configmaps-de5dbead-4649-44ac-93d2-7abaa4392158 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:25:37.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3499" for this suite. 
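Editor's note: the ConfigMap test above injects a ConfigMap key into a container as an environment variable and inspects the pod's output. A minimal hand-run sketch, with illustrative names:

    kubectl create configmap test-config --from-literal=DATA_1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "env | grep DATA_1"]
        env:
        - name: DATA_1
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: DATA_1
    EOF
    # Once the pod succeeds, its log should contain DATA_1=value-1:
    kubectl logs configmap-env-demo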
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4319,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:25:37.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 30 22:25:41.972: INFO: Successfully updated pod "annotationupdate720d392f-a157-49fc-8d2e-ed86e265fcbf" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:25:44.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-948" for this suite. • [SLOW TEST:6.646 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:25:44.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 30 22:25:44.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-954' May 30 22:25:49.315: INFO: stderr: "" May 30 22:25:49.315: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come 
up. May 30 22:25:49.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-954' May 30 22:25:49.469: INFO: stderr: "" May 30 22:25:49.469: INFO: stdout: "update-demo-nautilus-cwrls update-demo-nautilus-vsnr5 " May 30 22:25:49.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwrls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-954' May 30 22:25:49.585: INFO: stderr: "" May 30 22:25:49.585: INFO: stdout: "" May 30 22:25:49.585: INFO: update-demo-nautilus-cwrls is created but not running May 30 22:25:54.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-954' May 30 22:25:54.688: INFO: stderr: "" May 30 22:25:54.688: INFO: stdout: "update-demo-nautilus-cwrls update-demo-nautilus-vsnr5 " May 30 22:25:54.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwrls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-954' May 30 22:25:54.786: INFO: stderr: "" May 30 22:25:54.786: INFO: stdout: "true" May 30 22:25:54.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwrls -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-954' May 30 22:25:54.870: INFO: stderr: "" May 30 22:25:54.870: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 22:25:54.870: INFO: validating pod update-demo-nautilus-cwrls May 30 22:25:54.889: INFO: got data: { "image": "nautilus.jpg" } May 30 22:25:54.889: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 30 22:25:54.889: INFO: update-demo-nautilus-cwrls is verified up and running May 30 22:25:54.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vsnr5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-954' May 30 22:25:54.978: INFO: stderr: "" May 30 22:25:54.978: INFO: stdout: "true" May 30 22:25:54.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vsnr5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-954' May 30 22:25:55.077: INFO: stderr: "" May 30 22:25:55.077: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 22:25:55.077: INFO: validating pod update-demo-nautilus-vsnr5 May 30 22:25:55.081: INFO: got data: { "image": "nautilus.jpg" } May 30 22:25:55.081: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 30 22:25:55.081: INFO: update-demo-nautilus-vsnr5 is verified up and running STEP: rolling-update to new replication controller May 30 22:25:55.085: INFO: scanned /root for discovery docs: May 30 22:25:55.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-954' May 30 22:26:20.173: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 30 22:26:20.173: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 30 22:26:20.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-954' May 30 22:26:20.265: INFO: stderr: "" May 30 22:26:20.265: INFO: stdout: "update-demo-kitten-h4clt update-demo-kitten-stzkr " May 30 22:26:20.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h4clt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-954' May 30 22:26:20.355: INFO: stderr: "" May 30 22:26:20.355: INFO: stdout: "true" May 30 22:26:20.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h4clt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-954' May 30 22:26:20.458: INFO: stderr: "" May 30 22:26:20.458: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 30 22:26:20.458: INFO: validating pod update-demo-kitten-h4clt May 30 22:26:20.486: INFO: got data: { "image": "kitten.jpg" } May 30 22:26:20.486: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 30 22:26:20.486: INFO: update-demo-kitten-h4clt is verified up and running May 30 22:26:20.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-stzkr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-954' May 30 22:26:20.587: INFO: stderr: "" May 30 22:26:20.587: INFO: stdout: "true" May 30 22:26:20.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-stzkr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-954' May 30 22:26:20.690: INFO: stderr: "" May 30 22:26:20.690: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 30 22:26:20.690: INFO: validating pod update-demo-kitten-stzkr May 30 22:26:20.701: INFO: got data: { "image": "kitten.jpg" } May 30 22:26:20.701: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 30 22:26:20.701: INFO: update-demo-kitten-stzkr is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:26:20.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-954" for this suite. • [SLOW TEST:36.682 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":266,"skipped":4388,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:26:20.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 30 22:26:20.750: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 30 22:26:20.771: INFO: Waiting for terminating namespaces to be deleted... 
May 30 22:26:20.773: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 30 22:26:20.808: INFO: update-demo-kitten-stzkr from kubectl-954 started at 2020-05-30 22:25:57 +0000 UTC (1 container status recorded) May 30 22:26:20.808: INFO: Container update-demo ready: true, restart count 0 May 30 22:26:20.808: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 30 22:26:20.808: INFO: Container kube-proxy ready: true, restart count 0 May 30 22:26:20.808: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 30 22:26:20.808: INFO: Container kindnet-cni ready: true, restart count 2 May 30 22:26:20.808: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 30 22:26:20.815: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 30 22:26:20.815: INFO: Container kube-hunter ready: false, restart count 0 May 30 22:26:20.815: INFO: update-demo-kitten-h4clt from kubectl-954 started at 2020-05-30 22:26:04 +0000 UTC (1 container status recorded) May 30 22:26:20.815: INFO: Container update-demo ready: true, restart count 0 May 30 22:26:20.815: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 30 22:26:20.815: INFO: Container kindnet-cni ready: true, restart count 2 May 30 22:26:20.815: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 30 22:26:20.815: INFO: Container kube-bench ready: false, restart count 0 May 30 22:26:20.815: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 30 22:26:20.815: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 30 22:26:20.890: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 30 22:26:20.890: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 30 22:26:20.890: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 30 22:26:20.890: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 May 30 22:26:20.890: INFO: Pod update-demo-kitten-h4clt requesting resource cpu=0m on Node jerma-worker2 May 30 22:26:20.890: INFO: Pod update-demo-kitten-stzkr requesting resource cpu=0m on Node jerma-worker STEP: Starting Pods to consume most of the cluster CPU. May 30 22:26:20.890: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 30 22:26:20.939: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-6bac5168-6a36-4116-9248-d1a4efe3cfc8.1613eebb25f5adda], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1905/filler-pod-6bac5168-6a36-4116-9248-d1a4efe3cfc8 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-6bac5168-6a36-4116-9248-d1a4efe3cfc8.1613eebbc17dd39a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6bac5168-6a36-4116-9248-d1a4efe3cfc8.1613eebc06b4ee8f], Reason = [Created], Message = [Created container filler-pod-6bac5168-6a36-4116-9248-d1a4efe3cfc8] STEP: Considering event: Type = [Normal], Name = [filler-pod-6bac5168-6a36-4116-9248-d1a4efe3cfc8.1613eebc15ec707d], Reason = [Started], Message = [Started container filler-pod-6bac5168-6a36-4116-9248-d1a4efe3cfc8] STEP: Considering event: Type = [Normal], Name = [filler-pod-9b434a9f-e85a-4ddb-a4d0-062f04f4b2fe.1613eebb264804b0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1905/filler-pod-9b434a9f-e85a-4ddb-a4d0-062f04f4b2fe to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-9b434a9f-e85a-4ddb-a4d0-062f04f4b2fe.1613eebb70fecdaf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9b434a9f-e85a-4ddb-a4d0-062f04f4b2fe.1613eebbd21bfd07], Reason = [Created], Message = [Created container filler-pod-9b434a9f-e85a-4ddb-a4d0-062f04f4b2fe] STEP: Considering event: Type = [Normal], Name = [filler-pod-9b434a9f-e85a-4ddb-a4d0-062f04f4b2fe.1613eebbeafbd157], Reason = [Started], Message = [Started container filler-pod-9b434a9f-e85a-4ddb-a4d0-062f04f4b2fe] STEP: Considering event: Type = [Warning], Name = [additional-pod.1613eebc9927e747], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1613eebc9ce4e6cc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:26:28.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1905" for this suite. 
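Editor's note: the scheduling test above fills each node's allocatable CPU with "filler" pods, then shows that one more CPU-requesting pod is rejected with FailedScheduling / Insufficient cpu. The same event can be provoked directly with an oversized request (values illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cpu-hog
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "1000"   # far beyond any node's allocatable CPU
    EOF
    # Events should show: FailedScheduling ... Insufficient cpu
    kubectl describe pod cpu-hog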
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.549 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":267,"skipped":4393,"failed":0} SS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:26:28.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:26:28.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7828" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":268,"skipped":4395,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:26:28.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-2800/secret-test-6a3b8740-68a0-4d44-82cc-94547490aa58 STEP: Creating a pod to test consume secrets May 30 22:26:28.571: INFO: Waiting up to 5m0s for pod "pod-configmaps-90f8c4bf-5c69-4f51-93bc-482caa7cce09" in namespace "secrets-2800" to be "success or failure" May 30 22:26:28.585: INFO: Pod "pod-configmaps-90f8c4bf-5c69-4f51-93bc-482caa7cce09": Phase="Pending", Reason="", readiness=false. Elapsed: 14.687692ms May 30 22:26:30.590: INFO: Pod "pod-configmaps-90f8c4bf-5c69-4f51-93bc-482caa7cce09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019704614s May 30 22:26:32.594: INFO: Pod "pod-configmaps-90f8c4bf-5c69-4f51-93bc-482caa7cce09": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023714996s STEP: Saw pod success May 30 22:26:32.594: INFO: Pod "pod-configmaps-90f8c4bf-5c69-4f51-93bc-482caa7cce09" satisfied condition "success or failure" May 30 22:26:32.597: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-90f8c4bf-5c69-4f51-93bc-482caa7cce09 container env-test: STEP: delete the pod May 30 22:26:32.635: INFO: Waiting for pod pod-configmaps-90f8c4bf-5c69-4f51-93bc-482caa7cce09 to disappear May 30 22:26:32.642: INFO: Pod pod-configmaps-90f8c4bf-5c69-4f51-93bc-482caa7cce09 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:26:32.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2800" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4404,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:26:32.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 30 22:26:32.735: INFO: Waiting up to 5m0s for pod "pod-8149cf3f-f170-46ff-9714-c5bc030b6dd2" in namespace "emptydir-8430" to be "success or failure" May 30 22:26:32.738: INFO: Pod "pod-8149cf3f-f170-46ff-9714-c5bc030b6dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.437629ms May 30 22:26:34.742: INFO: Pod "pod-8149cf3f-f170-46ff-9714-c5bc030b6dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007557005s May 30 22:26:36.746: INFO: Pod "pod-8149cf3f-f170-46ff-9714-c5bc030b6dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011714383s May 30 22:26:38.751: INFO: Pod "pod-8149cf3f-f170-46ff-9714-c5bc030b6dd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016000006s STEP: Saw pod success May 30 22:26:38.751: INFO: Pod "pod-8149cf3f-f170-46ff-9714-c5bc030b6dd2" satisfied condition "success or failure" May 30 22:26:38.754: INFO: Trying to get logs from node jerma-worker pod pod-8149cf3f-f170-46ff-9714-c5bc030b6dd2 container test-container: STEP: delete the pod May 30 22:26:38.787: INFO: Waiting for pod pod-8149cf3f-f170-46ff-9714-c5bc030b6dd2 to disappear May 30 22:26:38.799: INFO: Pod pod-8149cf3f-f170-46ff-9714-c5bc030b6dd2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:26:38.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8430" for this suite. 
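Editor's note: the EmptyDir test above runs as a non-root user, writes into an emptyDir volume on the default medium, and verifies 0666 file permissions. A hand-run approximation (user ID and paths are illustrative, not the framework's actual test image logic):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001          # the "non-root" part of (non-root,0666,default)
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "umask 0 && echo hello > /mnt/test && ls -ln /mnt/test"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt
      volumes:
      - name: scratch
        emptyDir: {}             # "default" medium = node-local disk; medium: Memory would use tmpfs
    EOF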
• [SLOW TEST:6.159 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4406,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:26:38.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8573, will wait for the garbage collector to delete the pods May 30 22:26:42.941: INFO: Deleting Job.batch foo took: 6.419213ms May 30 22:26:43.242: INFO: Terminating Job.batch foo pods took: 300.213331ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:27:19.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8573" for this suite. 
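Editor's note: the Job test above relies on cascading deletion: removing the Job lets the garbage collector delete its pods via their ownerReferences, which is why the log waits for the pods after the Job object is gone. A hand-run sketch (names illustrative):

    kubectl create job foo --image=busybox -- sleep 3600
    # Deleting the Job cascades; the garbage collector terminates its pods.
    kubectl delete job foo
    # The pods labelled job-name=foo disappear shortly afterwards:
    kubectl get pods -l job-name=foo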
• [SLOW TEST:40.745 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":271,"skipped":4421,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:27:19.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 30 22:27:19.617: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-db3d6cff-e670-4554-9e7f-7f85784c3a3a" in namespace "security-context-test-2588" to be "success or failure" May 30 22:27:19.621: INFO: Pod "busybox-readonly-false-db3d6cff-e670-4554-9e7f-7f85784c3a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043701ms May 30 22:27:21.625: INFO: Pod "busybox-readonly-false-db3d6cff-e670-4554-9e7f-7f85784c3a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008315062s May 30 22:27:23.628: INFO: Pod "busybox-readonly-false-db3d6cff-e670-4554-9e7f-7f85784c3a3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011020424s May 30 22:27:23.628: INFO: Pod "busybox-readonly-false-db3d6cff-e670-4554-9e7f-7f85784c3a3a" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:27:23.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2588" for this suite. 
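Editor's note: the Security Context test above confirms a container can write to its root filesystem when readOnlyRootFilesystem=false. A minimal sketch; flipping the flag to true makes the same write fail:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: writable-rootfs-demo
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "echo data > /tmp/file && cat /tmp/file"]
        securityContext:
          readOnlyRootFilesystem: false   # set true and the write fails with 'Read-only file system'
    EOF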
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4431,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:27:23.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 30 22:27:23.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-618' May 30 22:27:23.841: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 30 22:27:23.841: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 30 22:27:23.853: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 30 22:27:23.856: INFO: scanned /root for discovery docs: May 30 22:27:23.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-618' May 30 22:27:40.723: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 30 22:27:40.724: INFO: stdout: "Created e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0\nScaling up e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" May 30 22:27:40.724: INFO: stdout: "Created e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0\nScaling up e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 30 22:27:40.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-618' May 30 22:27:40.826: INFO: stderr: "" May 30 22:27:40.826: INFO: stdout: "e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0-rjw4r " May 30 22:27:40.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0-rjw4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-618' May 30 22:27:40.929: INFO: stderr: "" May 30 22:27:40.929: INFO: stdout: "true" May 30 22:27:40.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0-rjw4r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-618' May 30 22:27:41.026: INFO: stderr: "" May 30 22:27:41.026: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 30 22:27:41.027: INFO: e2e-test-httpd-rc-902928426cfafa9d4012ed0065a4a6d0-rjw4r is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 30 22:27:41.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-618' May 30 22:27:41.157: INFO: stderr: "" May 30 22:27:41.158: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:27:41.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-618" for this suite. 
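Editor's note: the stderr above warns that `kubectl rolling-update` is deprecated. The modern equivalent of "rolling-update to the same image" is a Deployment plus `kubectl rollout restart`, which performs a rolling replacement of pods without changing the image (sketch, names illustrative):

    kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
    kubectl rollout restart deployment/e2e-test-httpd   # rolling replacement, same image
    kubectl rollout status deployment/e2e-test-httpd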
• [SLOW TEST:17.543 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":273,"skipped":4473,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:27:41.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-3f211076-2afc-4681-aa75-753ab9ee0338 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:27:41.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7841" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":274,"skipped":4497,"failed":0} SSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:27:41.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-2766 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2766 to expose endpoints map[] May 30 22:27:41.714: INFO: Get endpoints failed (2.938839ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 30 22:27:42.717: INFO: successfully validated that service multi-endpoint-test in namespace services-2766 exposes endpoints map[] (1.006543999s elapsed) STEP: Creating pod pod1 in namespace services-2766 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2766 to expose endpoints map[pod1:[100]] May 30 22:27:46.841: INFO: successfully validated that service multi-endpoint-test in namespace services-2766 exposes endpoints 
map[pod1:[100]] (4.118297243s elapsed) STEP: Creating pod pod2 in namespace services-2766 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2766 to expose endpoints map[pod1:[100] pod2:[101]] May 30 22:27:49.949: INFO: successfully validated that service multi-endpoint-test in namespace services-2766 exposes endpoints map[pod1:[100] pod2:[101]] (3.104528857s elapsed) STEP: Deleting pod pod1 in namespace services-2766 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2766 to expose endpoints map[pod2:[101]] May 30 22:27:51.036: INFO: successfully validated that service multi-endpoint-test in namespace services-2766 exposes endpoints map[pod2:[101]] (1.082014607s elapsed) STEP: Deleting pod pod2 in namespace services-2766 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2766 to expose endpoints map[] May 30 22:27:52.121: INFO: successfully validated that service multi-endpoint-test in namespace services-2766 exposes endpoints map[] (1.080884543s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:27:52.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2766" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.074 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":275,"skipped":4500,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:27:52.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:27:57.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1013" for this suite. 
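Editor's note: the Services test above watches the endpoints map of a service with two named ports as backing pods are created and deleted. A hand-run sketch of such a multiport service (port numbers and names are illustrative, not the test's exact spec):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: multi-endpoint-test
    spec:
      selector:
        app: multiport
      ports:
      - name: portname1
        port: 80
        targetPort: 100
      - name: portname2
        port: 81
        targetPort: 101
    EOF
    # Empty until pods labelled app=multiport exist; each matching pod then
    # contributes one address per port, mirroring the map[pod1:[100] pod2:[101]] output above.
    kubectl get endpoints multi-endpoint-test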
• [SLOW TEST:5.112 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":276,"skipped":4503,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:27:57.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:28:03.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3920" for this suite. STEP: Destroying namespace "nsdeletetest-6213" for this suite. May 30 22:28:03.710: INFO: Namespace nsdeletetest-6213 was already deleted STEP: Destroying namespace "nsdeletetest-9913" for this suite. 
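Editor's note: the ReplicationController test above demonstrates orphan adoption. A hand-run sketch: create a bare pod with a label, then an RC whose selector matches it; the controller adopts the existing pod (sets itself as owner) instead of creating a second replica. Names are illustrative:

    kubectl run pod-adoption --image=httpd:2.4.38-alpine --restart=Never --labels="name=pod-adoption"
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: pod-adoption
    spec:
      replicas: 1
      selector:
        name: pod-adoption
      template:
        metadata:
          labels:
            name: pod-adoption
        spec:
          containers:
          - name: httpd
            image: httpd:2.4.38-alpine
    EOF
    # The pre-existing pod now carries an ownerReference naming the RC:
    kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'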
• [SLOW TEST:6.290 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":277,"skipped":4540,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 30 22:28:03.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 30 22:28:03.808: INFO: Waiting up to 5m0s for pod "pod-9e43193e-22a0-42d7-9a69-1cb76606a7dd" in namespace "emptydir-9972" to be "success or failure" May 30 22:28:03.839: INFO: Pod "pod-9e43193e-22a0-42d7-9a69-1cb76606a7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 31.178329ms May 30 22:28:05.844: INFO: Pod "pod-9e43193e-22a0-42d7-9a69-1cb76606a7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035816371s May 30 22:28:07.848: INFO: Pod "pod-9e43193e-22a0-42d7-9a69-1cb76606a7dd": Phase="Running", Reason="", readiness=true. Elapsed: 4.039878464s May 30 22:28:09.852: INFO: Pod "pod-9e43193e-22a0-42d7-9a69-1cb76606a7dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043757424s STEP: Saw pod success May 30 22:28:09.852: INFO: Pod "pod-9e43193e-22a0-42d7-9a69-1cb76606a7dd" satisfied condition "success or failure" May 30 22:28:09.854: INFO: Trying to get logs from node jerma-worker2 pod pod-9e43193e-22a0-42d7-9a69-1cb76606a7dd container test-container: STEP: delete the pod May 30 22:28:09.887: INFO: Waiting for pod pod-9e43193e-22a0-42d7-9a69-1cb76606a7dd to disappear May 30 22:28:09.892: INFO: Pod pod-9e43193e-22a0-42d7-9a69-1cb76606a7dd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 30 22:28:09.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9972" for this suite. 
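Editor's note: the Namespaces test above verifies that deleting a namespace removes the services inside it. The same check by hand (names illustrative):

    kubectl create namespace nsdelete-demo
    kubectl create service clusterip test-svc --tcp=80:80 -n nsdelete-demo
    kubectl delete namespace nsdelete-demo        # blocks until the namespace is fully removed
    kubectl get services -n nsdelete-demo         # fails: the namespace no longer exists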
• [SLOW TEST:6.185 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4559,"failed":0} SSSSSMay 30 22:28:09.898: INFO: Running AfterSuite actions on all nodes May 30 22:28:09.898: INFO: Running AfterSuite actions on node 1 May 30 22:28:09.898: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0} Ran 278 of 4842 Specs in 4737.549 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped PASS