I0525 10:49:49.039484 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0525 10:49:49.039693 7 e2e.go:124] Starting e2e run "bdb4397a-25df-41e0-9572-afa1e212f873" on Ginkgo node 1 {"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1590403787 - Will randomize all specs Will run 275 of 4992 specs May 25 10:49:49.094: INFO: >>> kubeConfig: /root/.kube/config May 25 10:49:49.107: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 25 10:49:49.131: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 25 10:49:49.163: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 25 10:49:49.163: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 25 10:49:49.163: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 25 10:49:49.175: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 25 10:49:49.175: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 25 10:49:49.175: INFO: e2e test version: v1.18.2 May 25 10:49:49.176: INFO: kube-apiserver version: v1.18.2 May 25 10:49:49.177: INFO: >>> kubeConfig: /root/.kube/config May 25 10:49:49.181: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:49:49.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected May 25 10:49:49.266: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-59f3ca3a-6683-4672-a111-4f3dc75c2294 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-59f3ca3a-6683-4672-a111-4f3dc75c2294 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:49:55.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1506" for this suite. 
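For context on what the spec above exercises: the pod consumes the ConfigMap through a projected volume, and the kubelet refreshes the projected files after the ConfigMap object is updated, which is what the "waiting to observe update in volume" step polls for. A minimal Go construction sketch of such a pod follows; it is illustrative only (the package, function name, image, command and mount path are not taken from this run), assuming the k8s.io/api types used by this suite.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod returns a pod that mounts the named ConfigMap via a
// projected volume; updates to the ConfigMap eventually appear in the mounted files.
func projectedConfigMapPod(configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "viewer",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/projected/data; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-cm",
					MountPath: "/etc/projected",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
							},
						}},
					},
				},
			}},
		},
	}
}

The test then simply edits the ConfigMap's data and waits for the container to see the new file contents, which is why the spec needs no restart or redeploy step.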
• [SLOW TEST:6.216 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:49:55.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod May 25 10:50:00.052: INFO: Successfully updated pod "annotationupdate784777c3-06e0-483f-ac47-bacbd83a5bb1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:50:02.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5627" for this suite. 
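The downward API variant above works the same way, except the projected source is the pod's own metadata: metadata.annotations is written to a file, the test patches the annotations ("Successfully updated pod ..."), and then waits for the file content to change. A hedged sketch of such a pod (names, image and paths are illustrative, not from this run):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationsDownwardAPIPod exposes the pod's own annotations as a file through a
// projected downward API volume; editing the annotations later updates the file.
func annotationsDownwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-pod",
			Annotations: map[string]string{"build": "one"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}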
• [SLOW TEST:6.829 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":74,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:50:02.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7298.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7298.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7298.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7298.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7298.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7298.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7298.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7298.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7298.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7298.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7298.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 199.118.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.118.199_udp@PTR;check="$$(dig +tcp +noall +answer +search 199.118.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.118.199_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7298.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7298.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7298.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7298.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7298.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7298.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7298.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7298.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7298.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7298.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7298.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 199.118.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.118.199_udp@PTR;check="$$(dig +tcp +noall +answer +search 199.118.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.118.199_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 10:50:10.645: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:10.649: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:10.652: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:10.721: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:10.724: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:10.745: INFO: Lookups using dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462 failed for: [wheezy_tcp@dns-test-service.dns-7298.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local] May 25 10:50:15.759: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:15.763: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:15.792: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:15.795: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:15.817: INFO: Lookups using dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local] 
May 25 10:50:20.759: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:20.763: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:20.793: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:20.797: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:20.817: INFO: Lookups using dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local] May 25 10:50:25.937: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:25.940: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:26.146: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:26.148: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:26.170: INFO: Lookups using dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local] May 25 10:50:30.759: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:30.763: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 
10:50:30.789: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:30.792: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:30.807: INFO: Lookups using dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local] May 25 10:50:35.786: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:35.789: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:35.818: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:35.821: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local from pod dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462: the server could not find the requested resource (get pods dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462) May 25 10:50:35.839: INFO: Lookups using dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7298.svc.cluster.local] May 25 10:50:40.818: INFO: DNS probes using dns-7298/dns-test-732a0308-33e5-43ef-a968-8ce1c5e1c462 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:50:41.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7298" for this suite. 
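The dig loops above resolve A, SRV and PTR records for the test services from two prober pods (one wheezy-based, one jessie-based) and record an OK result file per successful lookup; after several retries every lookup succeeds at 10:50:40. For orientation only, a headless Service of the sort these cluster DNS names point at might be constructed as below (a sketch: the selector and port are assumptions, not values taken from this run).

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// headlessTestService returns a headless Service: with ClusterIP "None" the DNS name
// <service>.<namespace>.svc.cluster.local resolves to the ready pod IPs, and a named
// port is also published as an SRV record (_http._tcp.<service>.<namespace>.svc.cluster.local).
func headlessTestService(namespace string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service", Namespace: namespace},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless: no virtual IP, DNS returns pod IPs
			Selector:  map[string]string{"dns-test": "true"},
			Ports: []corev1.ServicePort{{
				Name:     "http",
				Port:     80,
				Protocol: corev1.ProtocolTCP,
			}},
		},
	}
}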
• [SLOW TEST:39.357 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":3,"skipped":123,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:50:41.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 25 10:50:41.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cee36f62-268c-4fd0-8ecf-9f8b0d1e5c10" in namespace "downward-api-47" to be "Succeeded or Failed" May 25 10:50:41.773: INFO: Pod "downwardapi-volume-cee36f62-268c-4fd0-8ecf-9f8b0d1e5c10": Phase="Pending", Reason="", readiness=false. Elapsed: 21.986658ms May 25 10:50:44.022: INFO: Pod "downwardapi-volume-cee36f62-268c-4fd0-8ecf-9f8b0d1e5c10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270311s May 25 10:50:46.082: INFO: Pod "downwardapi-volume-cee36f62-268c-4fd0-8ecf-9f8b0d1e5c10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.33027403s STEP: Saw pod success May 25 10:50:46.082: INFO: Pod "downwardapi-volume-cee36f62-268c-4fd0-8ecf-9f8b0d1e5c10" satisfied condition "Succeeded or Failed" May 25 10:50:46.084: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-cee36f62-268c-4fd0-8ecf-9f8b0d1e5c10 container client-container: STEP: delete the pod May 25 10:50:46.121: INFO: Waiting for pod downwardapi-volume-cee36f62-268c-4fd0-8ecf-9f8b0d1e5c10 to disappear May 25 10:50:46.137: INFO: Pod downwardapi-volume-cee36f62-268c-4fd0-8ecf-9f8b0d1e5c10 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:50:46.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-47" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:50:46.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:50:57.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8730" for this suite. • [SLOW TEST:11.307 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":5,"skipped":171,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:50:57.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 25 10:50:57.738: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8126f014-73e2-4eb5-96bc-ead637b1cbe9" in namespace "projected-1103" to be "Succeeded or Failed" May 25 10:50:57.743: INFO: Pod "downwardapi-volume-8126f014-73e2-4eb5-96bc-ead637b1cbe9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.556823ms May 25 10:50:59.747: INFO: Pod "downwardapi-volume-8126f014-73e2-4eb5-96bc-ead637b1cbe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009218774s May 25 10:51:01.752: INFO: Pod "downwardapi-volume-8126f014-73e2-4eb5-96bc-ead637b1cbe9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013566173s STEP: Saw pod success May 25 10:51:01.752: INFO: Pod "downwardapi-volume-8126f014-73e2-4eb5-96bc-ead637b1cbe9" satisfied condition "Succeeded or Failed" May 25 10:51:01.755: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-8126f014-73e2-4eb5-96bc-ead637b1cbe9 container client-container: STEP: delete the pod May 25 10:51:01.786: INFO: Waiting for pod downwardapi-volume-8126f014-73e2-4eb5-96bc-ead637b1cbe9 to disappear May 25 10:51:01.790: INFO: Pod downwardapi-volume-8126f014-73e2-4eb5-96bc-ead637b1cbe9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:51:01.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1103" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":219,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:51:01.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-302a0c5a-d4b5-4688-813d-1b0edfc89ef4 STEP: Creating a pod to test consume configMaps May 25 10:51:01.936: INFO: Waiting up to 5m0s for pod "pod-configmaps-45aba64b-4f88-42d3-b7a5-062aac35041d" in namespace "configmap-3084" to be "Succeeded or Failed" May 25 10:51:01.953: INFO: Pod "pod-configmaps-45aba64b-4f88-42d3-b7a5-062aac35041d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.106385ms May 25 10:51:04.027: INFO: Pod "pod-configmaps-45aba64b-4f88-42d3-b7a5-062aac35041d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09121233s May 25 10:51:06.032: INFO: Pod "pod-configmaps-45aba64b-4f88-42d3-b7a5-062aac35041d": Phase="Running", Reason="", readiness=true. Elapsed: 4.096477606s May 25 10:51:08.036: INFO: Pod "pod-configmaps-45aba64b-4f88-42d3-b7a5-062aac35041d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.100577135s STEP: Saw pod success May 25 10:51:08.036: INFO: Pod "pod-configmaps-45aba64b-4f88-42d3-b7a5-062aac35041d" satisfied condition "Succeeded or Failed" May 25 10:51:08.063: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-45aba64b-4f88-42d3-b7a5-062aac35041d container configmap-volume-test: STEP: delete the pod May 25 10:51:08.120: INFO: Waiting for pod pod-configmaps-45aba64b-4f88-42d3-b7a5-062aac35041d to disappear May 25 10:51:08.138: INFO: Pod pod-configmaps-45aba64b-4f88-42d3-b7a5-062aac35041d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:51:08.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3084" for this suite. • [SLOW TEST:6.351 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":221,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:51:08.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:51:25.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7534" for this suite. • [SLOW TEST:17.158 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":8,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:51:25.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD May 25 10:51:25.351: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:51:41.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1993" for this suite. • [SLOW TEST:15.725 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":9,"skipped":275,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:51:41.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 25 10:51:41.159: INFO: Created pod &Pod{ObjectMeta:{dns-1284 dns-1284 /api/v1/namespaces/dns-1284/pods/dns-1284 9e8c9e1e-b929-4635-abc1-5c89cac0b8cd 7162391 0 2020-05-25 10:51:41 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-25 10:51:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mtm4k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mtm4k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mtm4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]Loca
lObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:51:41.175: INFO: The status of Pod dns-1284 is Pending, waiting for it to be Running (with Ready = true) May 25 10:51:43.180: INFO: The status of Pod dns-1284 is Pending, waiting for it to be Running (with Ready = true) May 25 10:51:45.195: INFO: The status of Pod dns-1284 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 25 10:51:45.195: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1284 PodName:dns-1284 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 10:51:45.195: INFO: >>> kubeConfig: /root/.kube/config I0525 10:51:45.226990 7 log.go:172] (0xc002082210) (0xc001b28f00) Create stream I0525 10:51:45.227025 7 log.go:172] (0xc002082210) (0xc001b28f00) Stream added, broadcasting: 1 I0525 10:51:45.229640 7 log.go:172] (0xc002082210) Reply frame received for 1 I0525 10:51:45.229682 7 log.go:172] (0xc002082210) (0xc001858be0) Create stream I0525 10:51:45.229710 7 log.go:172] (0xc002082210) (0xc001858be0) Stream added, broadcasting: 3 I0525 10:51:45.230717 7 log.go:172] (0xc002082210) Reply frame received for 3 I0525 10:51:45.230820 7 log.go:172] (0xc002082210) (0xc001b29180) Create stream I0525 10:51:45.230841 7 log.go:172] (0xc002082210) (0xc001b29180) Stream added, broadcasting: 5 I0525 10:51:45.231649 7 log.go:172] (0xc002082210) Reply frame received for 5 I0525 10:51:45.308873 7 log.go:172] (0xc002082210) Data frame received for 3 I0525 10:51:45.308909 7 log.go:172] (0xc001858be0) (3) Data frame handling I0525 10:51:45.308929 7 log.go:172] (0xc001858be0) (3) Data frame sent I0525 10:51:45.309936 7 log.go:172] (0xc002082210) Data frame received for 5 I0525 10:51:45.309973 7 log.go:172] (0xc002082210) Data frame received for 3 I0525 10:51:45.310008 7 log.go:172] (0xc001858be0) (3) Data frame handling I0525 10:51:45.310055 7 log.go:172] (0xc001b29180) (5) Data frame handling I0525 10:51:45.312052 7 log.go:172] (0xc002082210) Data frame received for 1 I0525 10:51:45.312073 7 log.go:172] (0xc001b28f00) (1) Data frame handling I0525 10:51:45.312092 7 log.go:172] (0xc001b28f00) (1) Data frame sent I0525 10:51:45.312106 7 log.go:172] (0xc002082210) (0xc001b28f00) Stream removed, broadcasting: 1 I0525 10:51:45.312188 7 log.go:172] (0xc002082210) Go away received I0525 10:51:45.312497 7 log.go:172] (0xc002082210) (0xc001b28f00) Stream removed, broadcasting: 1 I0525 10:51:45.312512 7 log.go:172] 
(0xc002082210) (0xc001858be0) Stream removed, broadcasting: 3 I0525 10:51:45.312520 7 log.go:172] (0xc002082210) (0xc001b29180) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 25 10:51:45.312: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1284 PodName:dns-1284 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 10:51:45.312: INFO: >>> kubeConfig: /root/.kube/config I0525 10:51:45.347020 7 log.go:172] (0xc00212a370) (0xc0019643c0) Create stream I0525 10:51:45.347053 7 log.go:172] (0xc00212a370) (0xc0019643c0) Stream added, broadcasting: 1 I0525 10:51:45.349746 7 log.go:172] (0xc00212a370) Reply frame received for 1 I0525 10:51:45.349776 7 log.go:172] (0xc00212a370) (0xc001eec6e0) Create stream I0525 10:51:45.349788 7 log.go:172] (0xc00212a370) (0xc001eec6e0) Stream added, broadcasting: 3 I0525 10:51:45.350575 7 log.go:172] (0xc00212a370) Reply frame received for 3 I0525 10:51:45.350612 7 log.go:172] (0xc00212a370) (0xc001eec780) Create stream I0525 10:51:45.350625 7 log.go:172] (0xc00212a370) (0xc001eec780) Stream added, broadcasting: 5 I0525 10:51:45.351449 7 log.go:172] (0xc00212a370) Reply frame received for 5 I0525 10:51:45.422981 7 log.go:172] (0xc00212a370) Data frame received for 3 I0525 10:51:45.423024 7 log.go:172] (0xc001eec6e0) (3) Data frame handling I0525 10:51:45.423058 7 log.go:172] (0xc001eec6e0) (3) Data frame sent I0525 10:51:45.424362 7 log.go:172] (0xc00212a370) Data frame received for 5 I0525 10:51:45.424453 7 log.go:172] (0xc001eec780) (5) Data frame handling I0525 10:51:45.424542 7 log.go:172] (0xc00212a370) Data frame received for 3 I0525 10:51:45.424570 7 log.go:172] (0xc001eec6e0) (3) Data frame handling I0525 10:51:45.426615 7 log.go:172] (0xc00212a370) Data frame received for 1 I0525 10:51:45.426629 7 log.go:172] (0xc0019643c0) (1) Data frame handling I0525 10:51:45.426637 7 log.go:172] (0xc0019643c0) (1) Data frame sent I0525 10:51:45.426670 7 log.go:172] (0xc00212a370) (0xc0019643c0) Stream removed, broadcasting: 1 I0525 10:51:45.426731 7 log.go:172] (0xc00212a370) Go away received I0525 10:51:45.426835 7 log.go:172] (0xc00212a370) (0xc0019643c0) Stream removed, broadcasting: 1 I0525 10:51:45.426867 7 log.go:172] (0xc00212a370) (0xc001eec6e0) Stream removed, broadcasting: 3 I0525 10:51:45.426876 7 log.go:172] (0xc00212a370) (0xc001eec780) Stream removed, broadcasting: 5 May 25 10:51:45.426: INFO: Deleting pod dns-1284... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:51:45.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1284" for this suite. 
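The pod dump above shows exactly what this test sets: DNSPolicy None plus an explicit DNSConfig with nameserver 1.1.1.1 and search suffix resolv.conf.local, which the two agnhost exec probes (dns-suffix and dns-server-list) then read back from inside the container. Reduced to the fields that matter, the same pod can be built as below (a sketch: only the DNS fields, image and args are taken from the dump, the rest is illustrative).

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// customDNSPod mirrors the pod dumped above: dnsPolicy None plus an explicit
// dnsConfig, so /etc/resolv.conf inside the container lists only 1.1.1.1 and the
// custom search suffix instead of the cluster DNS settings.
func customDNSPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-custom"},
		Spec: corev1.PodSpec{
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
				Args:  []string{"pause"},
			}},
		},
	}
}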
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":10,"skipped":281,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:51:45.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 25 10:51:46.348: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-482 /api/v1/namespaces/watch-482/configmaps/e2e-watch-test-label-changed dbda28a3-ebf5-4d7c-991b-99d695d87b15 7162424 0 2020-05-25 10:51:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 10:51:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:51:46.349: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-482 /api/v1/namespaces/watch-482/configmaps/e2e-watch-test-label-changed dbda28a3-ebf5-4d7c-991b-99d695d87b15 7162425 0 2020-05-25 10:51:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 10:51:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:51:46.349: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-482 /api/v1/namespaces/watch-482/configmaps/e2e-watch-test-label-changed dbda28a3-ebf5-4d7c-991b-99d695d87b15 7162426 0 2020-05-25 10:51:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 10:51:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 25 10:51:56.423: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-482 /api/v1/namespaces/watch-482/configmaps/e2e-watch-test-label-changed dbda28a3-ebf5-4d7c-991b-99d695d87b15 7162471 0 2020-05-25 10:51:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 10:51:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:51:56.423: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-482 /api/v1/namespaces/watch-482/configmaps/e2e-watch-test-label-changed dbda28a3-ebf5-4d7c-991b-99d695d87b15 7162472 0 2020-05-25 10:51:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 10:51:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:51:56.423: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-482 /api/v1/namespaces/watch-482/configmaps/e2e-watch-test-label-changed dbda28a3-ebf5-4d7c-991b-99d695d87b15 7162473 0 2020-05-25 10:51:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 10:51:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:51:56.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-482" for this suite. 
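The watch opened above is filtered by a label selector, so when the configmap's label value changes the object falls out of the watch (observed as DELETED) and reappears as ADDED once the label is restored, even though the underlying object was only modified. A hedged client-go sketch of such a watch, assuming a v0.18-level client (the kubeconfig path, namespace and selector are parameters, not values from this run):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// watchLabelledConfigMaps opens a watch that only delivers ConfigMaps carrying the
// given label selector, so removing the label shows up as a DELETED event and
// restoring it as a fresh ADDED event.
func watchLabelledConfigMaps(kubeconfig, namespace, selector string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	w, err := client.CoreV1().ConfigMaps(namespace).Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", event.Type, event.Object)
	}
	return nil
}

Called with a selector such as watch-this-configmap=label-changed-and-restored, it would print an ADDED/MODIFIED/DELETED sequence similar to the "Got : ..." lines above.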
• [SLOW TEST:10.629 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":11,"skipped":314,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:51:56.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-0105f9db-f4d8-479c-aef1-dcbdd79d0053 STEP: Creating configMap with name cm-test-opt-upd-c026ec86-46d6-4d96-85ba-7befc5f9b44b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-0105f9db-f4d8-479c-aef1-dcbdd79d0053 STEP: Updating configmap cm-test-opt-upd-c026ec86-46d6-4d96-85ba-7befc5f9b44b STEP: Creating configMap with name cm-test-opt-create-d49cf724-a6a6-433f-aa64-b9e563dcf70c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:53:35.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6704" for this suite. 
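The optional-update variant above references ConfigMaps that may be deleted while the pod is running (cm-test-opt-del-...) or that are only created afterwards (cm-test-opt-create-...), which only works because the projected sources are marked optional. A sketch of such a source (the function name is illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// optionalConfigMapSource marks a projected ConfigMap source as optional, so the
// volume still mounts (and later updates) even if the referenced ConfigMap is
// missing or gets deleted while the pod is running.
func optionalConfigMapSource(name string) corev1.VolumeProjection {
	optional := true
	return corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: name},
			Optional:             &optional,
		},
	}
}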
• [SLOW TEST:98.975 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:53:35.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command May 25 10:53:35.522: INFO: Waiting up to 5m0s for pod "var-expansion-aed6f149-8e88-40b8-8533-228b17070de4" in namespace "var-expansion-4090" to be "Succeeded or Failed" May 25 10:53:35.526: INFO: Pod "var-expansion-aed6f149-8e88-40b8-8533-228b17070de4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.49507ms May 25 10:53:37.603: INFO: Pod "var-expansion-aed6f149-8e88-40b8-8533-228b17070de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080403521s May 25 10:53:39.609: INFO: Pod "var-expansion-aed6f149-8e88-40b8-8533-228b17070de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086105342s STEP: Saw pod success May 25 10:53:39.609: INFO: Pod "var-expansion-aed6f149-8e88-40b8-8533-228b17070de4" satisfied condition "Succeeded or Failed" May 25 10:53:39.612: INFO: Trying to get logs from node kali-worker pod var-expansion-aed6f149-8e88-40b8-8533-228b17070de4 container dapi-container: STEP: delete the pod May 25 10:53:39.694: INFO: Waiting for pod var-expansion-aed6f149-8e88-40b8-8533-228b17070de4 to disappear May 25 10:53:39.698: INFO: Pod var-expansion-aed6f149-8e88-40b8-8533-228b17070de4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:53:39.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4090" for this suite. 
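The substitution verified above is performed by the kubelet, not by a shell: $(NAME) references in a container's command and args are expanded from that container's own environment variables before the process starts. A sketch of a pod using it (all names and values are illustrative, not from this run):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// varExpansionPod echoes an env var referenced via $(MESSAGE) in the command;
// the kubelet expands it from the container's Env before starting the process,
// so no shell is involved.
func varExpansionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"/bin/echo", "$(MESSAGE)"},
				Env: []corev1.EnvVar{{
					Name:  "MESSAGE",
					Value: "hello from the environment",
				}},
			}},
		},
	}
}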
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":353,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:53:39.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 25 10:53:39.885: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:53:53.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3028" for this suite. • [SLOW TEST:14.164 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":362,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:53:53.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:53:55.010: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 10:53:57.054: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000835, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000835, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000835, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000834, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:53:59.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000835, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000835, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000835, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000834, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:54:02.095: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:54:02.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1058" for this suite. STEP: Destroying namespace "webhook-1058-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.911 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":15,"skipped":364,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:54:02.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:54:04.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4094" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":373,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:54:05.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-b3de648b-6ddb-4e73-8fd5-bee11e853476 STEP: Creating a pod to test consume configMaps May 25 10:54:06.639: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3a64e4e9-6a02-4c4d-abdb-a675a11dcd62" in namespace "projected-1879" to be "Succeeded or Failed" May 25 10:54:07.029: INFO: Pod "pod-projected-configmaps-3a64e4e9-6a02-4c4d-abdb-a675a11dcd62": Phase="Pending", Reason="", readiness=false. Elapsed: 390.554031ms May 25 10:54:09.155: INFO: Pod "pod-projected-configmaps-3a64e4e9-6a02-4c4d-abdb-a675a11dcd62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.516045111s May 25 10:54:11.660: INFO: Pod "pod-projected-configmaps-3a64e4e9-6a02-4c4d-abdb-a675a11dcd62": Phase="Pending", Reason="", readiness=false. Elapsed: 5.021117885s May 25 10:54:13.723: INFO: Pod "pod-projected-configmaps-3a64e4e9-6a02-4c4d-abdb-a675a11dcd62": Phase="Pending", Reason="", readiness=false. Elapsed: 7.084228729s May 25 10:54:15.749: INFO: Pod "pod-projected-configmaps-3a64e4e9-6a02-4c4d-abdb-a675a11dcd62": Phase="Pending", Reason="", readiness=false. Elapsed: 9.11016719s May 25 10:54:17.757: INFO: Pod "pod-projected-configmaps-3a64e4e9-6a02-4c4d-abdb-a675a11dcd62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.118028005s STEP: Saw pod success May 25 10:54:17.757: INFO: Pod "pod-projected-configmaps-3a64e4e9-6a02-4c4d-abdb-a675a11dcd62" satisfied condition "Succeeded or Failed" May 25 10:54:17.760: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-3a64e4e9-6a02-4c4d-abdb-a675a11dcd62 container projected-configmap-volume-test: STEP: delete the pod May 25 10:54:17.827: INFO: Waiting for pod pod-projected-configmaps-3a64e4e9-6a02-4c4d-abdb-a675a11dcd62 to disappear May 25 10:54:17.832: INFO: Pod pod-projected-configmaps-3a64e4e9-6a02-4c4d-abdb-a675a11dcd62 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:54:17.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1879" for this suite. 
• [SLOW TEST:12.584 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":374,"failed":0} [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:54:17.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 25 10:54:17.956: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 25 10:54:22.960: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 25 10:54:22.960: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 25 10:54:24.965: INFO: Creating deployment "test-rollover-deployment" May 25 10:54:24.977: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 25 10:54:26.984: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 25 10:54:26.991: INFO: Ensure that both replica sets have 1 created replica May 25 10:54:26.997: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 25 10:54:27.006: INFO: Updating deployment test-rollover-deployment May 25 10:54:27.006: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 25 10:54:29.034: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 25 10:54:29.040: INFO: Make sure deployment "test-rollover-deployment" is complete May 25 10:54:29.162: INFO: all replica sets need to contain the pod-template-hash label May 25 10:54:29.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000867, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000864, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:54:31.170: INFO: all replica sets need to contain the pod-template-hash label May 25 10:54:31.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000870, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000864, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:54:33.171: INFO: all replica sets need to contain the pod-template-hash label May 25 10:54:33.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000870, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000864, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:54:35.170: INFO: all replica sets need to contain the pod-template-hash label May 25 10:54:35.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000870, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000864, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:54:37.169: INFO: all replica sets need to contain the pod-template-hash label May 25 10:54:37.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000870, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000864, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:54:39.170: INFO: all replica sets need to contain the pod-template-hash label May 25 10:54:39.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000865, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000870, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000864, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:54:41.169: INFO: May 25 10:54:41.169: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 25 10:54:41.178: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9049 /apis/apps/v1/namespaces/deployment-9049/deployments/test-rollover-deployment c5d5881d-8366-4d24-8813-c151cc5d473d 7163224 2 2020-05-25 10:54:24 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-25 10:54:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 
58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-25 10:54:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d17f28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-25 10:54:25 +0000 UTC,LastTransitionTime:2020-05-25 10:54:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-05-25 10:54:40 +0000 UTC,LastTransitionTime:2020-05-25 10:54:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 25 10:54:41.182: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b deployment-9049 /apis/apps/v1/namespaces/deployment-9049/replicasets/test-rollover-deployment-84f7f6f64b 42558bd0-b52f-47a6-9c8c-a60fb501ea08 7163213 2 2020-05-25 10:54:27 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment c5d5881d-8366-4d24-8813-c151cc5d473d 0xc0043916d7 0xc0043916d8}] [] [{kube-controller-manager Update apps/v1 2020-05-25 10:54:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 53 100 53 56 56 49 100 45 56 51 54 54 45 52 100 50 52 45 56 56 49 51 45 99 49 53 49 99 99 53 100 52 
55 51 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 
0xc004391768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 10:54:41.182: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 25 10:54:41.182: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9049 /apis/apps/v1/namespaces/deployment-9049/replicasets/test-rollover-controller e326b4e5-1c95-4400-99ca-d20a971aa85a 7163223 2 2020-05-25 10:54:17 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment c5d5881d-8366-4d24-8813-c151cc5d473d 0xc0043914bf 0xc0043914d0}] [] [{e2e.test Update apps/v1 2020-05-25 10:54:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-25 10:54:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 
119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 53 100 53 56 56 49 100 45 56 51 54 54 45 52 100 50 52 45 56 56 49 51 45 99 49 53 49 99 99 53 100 52 55 51 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004391568 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 10:54:41.183: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-9049 /apis/apps/v1/namespaces/deployment-9049/replicasets/test-rollover-deployment-5686c4cfd5 4629534c-53d9-4aea-a72f-b2850efd4d5b 7163159 2 2020-05-25 10:54:24 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment c5d5881d-8366-4d24-8813-c151cc5d473d 0xc0043915d7 0xc0043915d8}] [] [{kube-controller-manager Update apps/v1 2020-05-25 10:54:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 53 100 53 56 56 49 100 45 56 51 54 54 45 52 100 
50 52 45 56 56 49 51 45 99 49 53 49 99 99 53 100 52 55 51 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004391668 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 10:54:41.187: INFO: Pod "test-rollover-deployment-84f7f6f64b-6bwnx" is available: &Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-6bwnx test-rollover-deployment-84f7f6f64b- deployment-9049 /api/v1/namespaces/deployment-9049/pods/test-rollover-deployment-84f7f6f64b-6bwnx 62510883-c637-433e-9d9e-a44e01eeb37f 7163181 0 2020-05-25 10:54:27 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 42558bd0-b52f-47a6-9c8c-a60fb501ea08 0xc004391d37 0xc004391d38}] [] [{kube-controller-manager Update v1 2020-05-25 10:54:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 50 53 53 56 98 100 48 45 98 53 50 102 45 52 55 97 54 45 57 99 56 99 45 97 54 48 102 98 53 48 49 101 97 48 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 10:54:30 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 55 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q8g8j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q8g8j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q8g8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNo
nRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 10:54:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 10:54:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 10:54:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 10:54:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.73,StartTime:2020-05-25 10:54:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 10:54:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://f18c64284a076344045a414a3d9a3f8fb08f03b74c7a0205a7b8c8dbde3faf06,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:54:41.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9049" for this suite. 
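The strategy knobs driving the rollover above are visible in the dump: maxUnavailable=0, maxSurge=1 and minReadySeconds=10, so the deployment keeps one ready pod while each replacement must stay ready for 10 seconds before the old ReplicaSet is scaled to zero. A minimal sketch of a Deployment with the same settings; the namespace and label values are illustrative:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rolloverDeployment mirrors the rollover spec's strategy: maxUnavailable=0 /
// maxSurge=1 with minReadySeconds=10, so a rollover never drops below one
// ready pod and each new pod must hold ready for 10s before old ReplicaSets
// are scaled down.
func rolloverDeployment(ns, image string) *appsv1.Deployment {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	labels := map[string]string{"name": "rollover-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment", Namespace: ns},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			MinReadySeconds: 10,
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "agnhost", Image: image}},
				},
			},
		},
	}
}

func main() {
	d := rolloverDeployment("default", "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12")
	fmt.Println(d.Name, d.Spec.Strategy.Type)
}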
• [SLOW TEST:23.353 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":18,"skipped":374,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:54:41.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:54:42.185: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 10:54:44.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000882, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000882, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000882, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000882, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:54:46.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000882, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000882, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000882, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000882, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} 
STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:54:49.254: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 25 10:54:49.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:54:50.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2085" for this suite. STEP: Destroying namespace "webhook-2085-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.688 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":19,"skipped":378,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:54:50.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-15a15787-3a5a-4218-861d-a95a3f5c4934 STEP: Creating a pod to test consume secrets May 25 10:54:51.045: INFO: Waiting up to 5m0s for pod "pod-secrets-22167e29-13da-496d-b6f1-208f0b0ba7f1" in namespace "secrets-8057" to be "Succeeded or Failed" May 25 10:54:51.062: INFO: Pod "pod-secrets-22167e29-13da-496d-b6f1-208f0b0ba7f1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.087617ms May 25 10:54:53.066: INFO: Pod "pod-secrets-22167e29-13da-496d-b6f1-208f0b0ba7f1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02014711s May 25 10:54:55.177: INFO: Pod "pod-secrets-22167e29-13da-496d-b6f1-208f0b0ba7f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131469188s May 25 10:54:57.182: INFO: Pod "pod-secrets-22167e29-13da-496d-b6f1-208f0b0ba7f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136450238s STEP: Saw pod success May 25 10:54:57.182: INFO: Pod "pod-secrets-22167e29-13da-496d-b6f1-208f0b0ba7f1" satisfied condition "Succeeded or Failed" May 25 10:54:57.185: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-22167e29-13da-496d-b6f1-208f0b0ba7f1 container secret-volume-test: STEP: delete the pod May 25 10:54:57.219: INFO: Waiting for pod pod-secrets-22167e29-13da-496d-b6f1-208f0b0ba7f1 to disappear May 25 10:54:57.223: INFO: Pod pod-secrets-22167e29-13da-496d-b6f1-208f0b0ba7f1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:54:57.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8057" for this suite. • [SLOW TEST:6.347 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":385,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:54:57.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 25 10:55:01.859: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2123eda4-4a6c-4df8-9ee7-1ed9a8607841" May 25 10:55:01.860: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2123eda4-4a6c-4df8-9ee7-1ed9a8607841" in namespace "pods-7617" to be "terminated due to deadline exceeded" May 25 10:55:01.908: INFO: Pod "pod-update-activedeadlineseconds-2123eda4-4a6c-4df8-9ee7-1ed9a8607841": Phase="Running", Reason="", readiness=true. Elapsed: 48.757341ms May 25 10:55:03.912: INFO: Pod "pod-update-activedeadlineseconds-2123eda4-4a6c-4df8-9ee7-1ed9a8607841": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.05290073s May 25 10:55:03.913: INFO: Pod "pod-update-activedeadlineseconds-2123eda4-4a6c-4df8-9ee7-1ed9a8607841" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:55:03.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7617" for this suite. • [SLOW TEST:6.689 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":390,"failed":0} SSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:55:03.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-6088 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6088 to expose endpoints map[] May 25 10:55:04.183: INFO: Get endpoints failed (3.743726ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 25 10:55:05.187: INFO: successfully validated that service multi-endpoint-test in namespace services-6088 exposes endpoints map[] (1.007489196s elapsed) STEP: Creating pod pod1 in namespace services-6088 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6088 to expose endpoints map[pod1:[100]] May 25 10:55:09.918: INFO: successfully validated that service multi-endpoint-test in namespace services-6088 exposes endpoints map[pod1:[100]] (4.724366755s elapsed) STEP: Creating pod pod2 in namespace services-6088 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6088 to expose endpoints map[pod1:[100] pod2:[101]] May 25 10:55:14.534: INFO: successfully validated that service multi-endpoint-test in namespace services-6088 exposes endpoints map[pod1:[100] pod2:[101]] (4.560948733s elapsed) STEP: Deleting pod pod1 in namespace services-6088 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6088 to expose endpoints map[pod2:[101]] May 25 10:55:15.606: INFO: successfully validated that service multi-endpoint-test in namespace services-6088 exposes endpoints map[pod2:[101]] (1.066892994s elapsed) STEP: Deleting pod pod2 in namespace services-6088 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6088 to expose endpoints map[] May 25 10:55:16.625: INFO: successfully validated that service 
multi-endpoint-test in namespace services-6088 exposes endpoints map[] (1.012992695s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:55:16.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6088" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.814 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":22,"skipped":395,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:55:16.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 25 10:55:16.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de1d16ca-3101-4da6-ba46-fcd09e3cb5a1" in namespace "downward-api-1056" to be "Succeeded or Failed" May 25 10:55:16.951: INFO: Pod "downwardapi-volume-de1d16ca-3101-4da6-ba46-fcd09e3cb5a1": Phase="Pending", Reason="", readiness=false. Elapsed: 56.183783ms May 25 10:55:18.955: INFO: Pod "downwardapi-volume-de1d16ca-3101-4da6-ba46-fcd09e3cb5a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060424023s May 25 10:55:20.999: INFO: Pod "downwardapi-volume-de1d16ca-3101-4da6-ba46-fcd09e3cb5a1": Phase="Running", Reason="", readiness=true. Elapsed: 4.104591201s May 25 10:55:23.004: INFO: Pod "downwardapi-volume-de1d16ca-3101-4da6-ba46-fcd09e3cb5a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.10956612s STEP: Saw pod success May 25 10:55:23.004: INFO: Pod "downwardapi-volume-de1d16ca-3101-4da6-ba46-fcd09e3cb5a1" satisfied condition "Succeeded or Failed" May 25 10:55:23.007: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-de1d16ca-3101-4da6-ba46-fcd09e3cb5a1 container client-container: STEP: delete the pod May 25 10:55:23.234: INFO: Waiting for pod downwardapi-volume-de1d16ca-3101-4da6-ba46-fcd09e3cb5a1 to disappear May 25 10:55:23.309: INFO: Pod downwardapi-volume-de1d16ca-3101-4da6-ba46-fcd09e3cb5a1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:55:23.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1056" for this suite. • [SLOW TEST:6.591 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":437,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:55:23.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs May 25 10:55:23.473: INFO: Waiting up to 5m0s for pod "pod-81cca10c-0e68-4ed7-86df-afc0faaa697e" in namespace "emptydir-1" to be "Succeeded or Failed" May 25 10:55:23.485: INFO: Pod "pod-81cca10c-0e68-4ed7-86df-afc0faaa697e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.669264ms May 25 10:55:25.490: INFO: Pod "pod-81cca10c-0e68-4ed7-86df-afc0faaa697e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017397305s May 25 10:55:27.494: INFO: Pod "pod-81cca10c-0e68-4ed7-86df-afc0faaa697e": Phase="Running", Reason="", readiness=true. Elapsed: 4.021446898s May 25 10:55:29.499: INFO: Pod "pod-81cca10c-0e68-4ed7-86df-afc0faaa697e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.02638771s STEP: Saw pod success May 25 10:55:29.499: INFO: Pod "pod-81cca10c-0e68-4ed7-86df-afc0faaa697e" satisfied condition "Succeeded or Failed" May 25 10:55:29.502: INFO: Trying to get logs from node kali-worker pod pod-81cca10c-0e68-4ed7-86df-afc0faaa697e container test-container: STEP: delete the pod May 25 10:55:29.642: INFO: Waiting for pod pod-81cca10c-0e68-4ed7-86df-afc0faaa697e to disappear May 25 10:55:29.675: INFO: Pod pod-81cca10c-0e68-4ed7-86df-afc0faaa697e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:55:29.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1" for this suite. • [SLOW TEST:6.357 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":462,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:55:29.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:55:29.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7036" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":25,"skipped":469,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:55:29.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 25 10:55:34.154: INFO: &Pod{ObjectMeta:{send-events-8d8bb0a1-ccb1-4442-8f1d-b812dec14970 events-1784 /api/v1/namespaces/events-1784/pods/send-events-8d8bb0a1-ccb1-4442-8f1d-b812dec14970 8096e53a-3b96-47ee-bfbc-8163e1b5ec89 7163662 0 2020-05-25 10:55:30 +0000 UTC map[name:foo time:104397075] map[] [] [] [{e2e.test Update v1 2020-05-25 10:55:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 10:55:33 +0000 UTC FieldsV1 
&FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 55 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4jx6v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4jx6v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4jx6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 10:55:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 10:55:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 10:55:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 10:55:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.78,StartTime:2020-05-25 10:55:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 10:55:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://d7d4d0f0a95d8ab38c423a8ab2fb13a1964a4ec73fece49dcded7255815be205,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 25 10:55:36.159: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 25 10:55:38.164: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:55:38.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1784" for this suite. • [SLOW TEST:8.300 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":26,"skipped":523,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:55:38.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-a444a892-13a7-4dab-982e-42917d73c420 STEP: Creating a pod to test consume configMaps May 25 10:55:38.439: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0d5a29fb-ef93-4d04-a738-96376d83f0e6" in namespace "projected-5455" to be "Succeeded or Failed" May 25 10:55:38.463: INFO: Pod "pod-projected-configmaps-0d5a29fb-ef93-4d04-a738-96376d83f0e6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.555688ms May 25 10:55:40.468: INFO: Pod "pod-projected-configmaps-0d5a29fb-ef93-4d04-a738-96376d83f0e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028294843s May 25 10:55:42.473: INFO: Pod "pod-projected-configmaps-0d5a29fb-ef93-4d04-a738-96376d83f0e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033080041s STEP: Saw pod success May 25 10:55:42.473: INFO: Pod "pod-projected-configmaps-0d5a29fb-ef93-4d04-a738-96376d83f0e6" satisfied condition "Succeeded or Failed" May 25 10:55:42.476: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-0d5a29fb-ef93-4d04-a738-96376d83f0e6 container projected-configmap-volume-test: STEP: delete the pod May 25 10:55:42.496: INFO: Waiting for pod pod-projected-configmaps-0d5a29fb-ef93-4d04-a738-96376d83f0e6 to disappear May 25 10:55:42.500: INFO: Pod pod-projected-configmaps-0d5a29fb-ef93-4d04-a738-96376d83f0e6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:55:42.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5455" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:55:42.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:55:43.358: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 10:55:45.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000943, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000943, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000943, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000943, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:55:47.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000943, 
loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000943, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000943, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000943, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:55:50.459: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 25 10:55:50.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7154-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:55:51.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4382" for this suite. STEP: Destroying namespace "webhook-4382-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.292 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":28,"skipped":568,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:55:51.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-ccf12435-8b89-4afc-98c3-34fa9e68880c STEP: Creating a pod to test consume secrets May 25 10:55:51.930: INFO: Waiting up to 5m0s for pod "pod-secrets-906ef74c-2a89-4568-9a0b-bfe9b69e2f76" in namespace "secrets-3932" to be "Succeeded or Failed" May 25 10:55:51.947: INFO: Pod 
"pod-secrets-906ef74c-2a89-4568-9a0b-bfe9b69e2f76": Phase="Pending", Reason="", readiness=false. Elapsed: 17.217399ms May 25 10:55:53.952: INFO: Pod "pod-secrets-906ef74c-2a89-4568-9a0b-bfe9b69e2f76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021642219s May 25 10:55:55.956: INFO: Pod "pod-secrets-906ef74c-2a89-4568-9a0b-bfe9b69e2f76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026134209s STEP: Saw pod success May 25 10:55:55.956: INFO: Pod "pod-secrets-906ef74c-2a89-4568-9a0b-bfe9b69e2f76" satisfied condition "Succeeded or Failed" May 25 10:55:55.960: INFO: Trying to get logs from node kali-worker pod pod-secrets-906ef74c-2a89-4568-9a0b-bfe9b69e2f76 container secret-volume-test: STEP: delete the pod May 25 10:55:56.110: INFO: Waiting for pod pod-secrets-906ef74c-2a89-4568-9a0b-bfe9b69e2f76 to disappear May 25 10:55:56.118: INFO: Pod pod-secrets-906ef74c-2a89-4568-9a0b-bfe9b69e2f76 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:55:56.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3932" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":610,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:55:56.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-18dfbfc2-996f-4028-b8fc-4aca2c8a31ff STEP: Creating a pod to test consume secrets May 25 10:55:56.204: INFO: Waiting up to 5m0s for pod "pod-secrets-c53e292a-7fec-4dbc-adf9-3fabd61bfa30" in namespace "secrets-1240" to be "Succeeded or Failed" May 25 10:55:56.239: INFO: Pod "pod-secrets-c53e292a-7fec-4dbc-adf9-3fabd61bfa30": Phase="Pending", Reason="", readiness=false. Elapsed: 35.479892ms May 25 10:55:58.247: INFO: Pod "pod-secrets-c53e292a-7fec-4dbc-adf9-3fabd61bfa30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043188523s May 25 10:56:00.252: INFO: Pod "pod-secrets-c53e292a-7fec-4dbc-adf9-3fabd61bfa30": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.048153779s STEP: Saw pod success May 25 10:56:00.252: INFO: Pod "pod-secrets-c53e292a-7fec-4dbc-adf9-3fabd61bfa30" satisfied condition "Succeeded or Failed" May 25 10:56:00.255: INFO: Trying to get logs from node kali-worker pod pod-secrets-c53e292a-7fec-4dbc-adf9-3fabd61bfa30 container secret-volume-test: STEP: delete the pod May 25 10:56:00.324: INFO: Waiting for pod pod-secrets-c53e292a-7fec-4dbc-adf9-3fabd61bfa30 to disappear May 25 10:56:00.335: INFO: Pod pod-secrets-c53e292a-7fec-4dbc-adf9-3fabd61bfa30 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:56:00.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1240" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":611,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:56:00.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 25 10:56:00.446: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05246b27-d394-4517-a25a-edb962665254" in namespace "downward-api-5082" to be "Succeeded or Failed" May 25 10:56:00.470: INFO: Pod "downwardapi-volume-05246b27-d394-4517-a25a-edb962665254": Phase="Pending", Reason="", readiness=false. Elapsed: 24.582337ms May 25 10:56:02.475: INFO: Pod "downwardapi-volume-05246b27-d394-4517-a25a-edb962665254": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029210818s May 25 10:56:04.480: INFO: Pod "downwardapi-volume-05246b27-d394-4517-a25a-edb962665254": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034001554s STEP: Saw pod success May 25 10:56:04.480: INFO: Pod "downwardapi-volume-05246b27-d394-4517-a25a-edb962665254" satisfied condition "Succeeded or Failed" May 25 10:56:04.483: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-05246b27-d394-4517-a25a-edb962665254 container client-container: STEP: delete the pod May 25 10:56:04.517: INFO: Waiting for pod downwardapi-volume-05246b27-d394-4517-a25a-edb962665254 to disappear May 25 10:56:04.528: INFO: Pod downwardapi-volume-05246b27-d394-4517-a25a-edb962665254 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:56:04.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5082" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":666,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:56:04.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-22b3749a-9ee6-473b-bb7c-867e4b557f1e STEP: Creating a pod to test consume secrets May 25 10:56:04.683: INFO: Waiting up to 5m0s for pod "pod-secrets-16676b0d-cc2f-4e1b-9995-05ffb2c8c863" in namespace "secrets-4047" to be "Succeeded or Failed" May 25 10:56:04.695: INFO: Pod "pod-secrets-16676b0d-cc2f-4e1b-9995-05ffb2c8c863": Phase="Pending", Reason="", readiness=false. Elapsed: 11.945738ms May 25 10:56:06.700: INFO: Pod "pod-secrets-16676b0d-cc2f-4e1b-9995-05ffb2c8c863": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016661276s May 25 10:56:08.704: INFO: Pod "pod-secrets-16676b0d-cc2f-4e1b-9995-05ffb2c8c863": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021100035s STEP: Saw pod success May 25 10:56:08.704: INFO: Pod "pod-secrets-16676b0d-cc2f-4e1b-9995-05ffb2c8c863" satisfied condition "Succeeded or Failed" May 25 10:56:08.707: INFO: Trying to get logs from node kali-worker pod pod-secrets-16676b0d-cc2f-4e1b-9995-05ffb2c8c863 container secret-volume-test: STEP: delete the pod May 25 10:56:08.814: INFO: Waiting for pod pod-secrets-16676b0d-cc2f-4e1b-9995-05ffb2c8c863 to disappear May 25 10:56:08.839: INFO: Pod pod-secrets-16676b0d-cc2f-4e1b-9995-05ffb2c8c863 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:56:08.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4047" for this suite. STEP: Destroying namespace "secret-namespace-4643" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":678,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:56:08.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-4098 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4098 STEP: Deleting pre-stop pod May 25 10:56:22.242: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:56:22.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4098" for this suite. 
• [SLOW TEST:13.448 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":33,"skipped":699,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:56:22.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-253bb298-4ad6-4603-9952-c5e2d07b6b4f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:56:26.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9008" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:56:26.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:56:27.654: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 10:56:29.666: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000987, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000987, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000987, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000987, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:56:32.709: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:56:34.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2413" for this suite. STEP: Destroying namespace "webhook-2413-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.527 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":35,"skipped":731,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:56:34.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 25 10:56:38.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9456" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":759,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 25 10:56:38.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 25 10:56:38.650: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/:
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 10:56:39.278: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 10:56:41.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000999, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000999, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000999, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726000999, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 10:56:44.328: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May 25 10:56:48.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config attach --namespace=webhook-5262 to-be-attached-pod -i -c=container1'
May 25 10:56:51.582: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 10:56:51.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5262" for this suite.
STEP: Destroying namespace "webhook-5262-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.032 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":38,"skipped":775,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 10:56:51.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 25 10:56:56.172: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 10:56:56.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-263" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":779,"failed":0}

------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 10:56:56.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-d3c72823-f1d7-4ae5-a95b-44a923fb9920 in namespace container-probe-1446
May 25 10:57:00.522: INFO: Started pod busybox-d3c72823-f1d7-4ae5-a95b-44a923fb9920 in namespace container-probe-1446
STEP: checking the pod's current state and verifying that restartCount is present
May 25 10:57:00.526: INFO: Initial restart count of pod busybox-d3c72823-f1d7-4ae5-a95b-44a923fb9920 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:01:01.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1446" for this suite.

• [SLOW TEST:244.776 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":779,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:01:01.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-70cdb10a-b067-41b7-89e5-758b692bfef1
STEP: Creating a pod to test consume configMaps
May 25 11:01:01.291: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a323581-e09d-4524-87e9-7c9cc724301a" in namespace "projected-6321" to be "Succeeded or Failed"
May 25 11:01:01.306: INFO: Pod "pod-projected-configmaps-1a323581-e09d-4524-87e9-7c9cc724301a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.34257ms
May 25 11:01:03.397: INFO: Pod "pod-projected-configmaps-1a323581-e09d-4524-87e9-7c9cc724301a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106536191s
May 25 11:01:05.400: INFO: Pod "pod-projected-configmaps-1a323581-e09d-4524-87e9-7c9cc724301a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108979287s
May 25 11:01:07.403: INFO: Pod "pod-projected-configmaps-1a323581-e09d-4524-87e9-7c9cc724301a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.112466093s
STEP: Saw pod success
May 25 11:01:07.403: INFO: Pod "pod-projected-configmaps-1a323581-e09d-4524-87e9-7c9cc724301a" satisfied condition "Succeeded or Failed"
May 25 11:01:07.406: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-1a323581-e09d-4524-87e9-7c9cc724301a container projected-configmap-volume-test: 
STEP: delete the pod
May 25 11:01:07.460: INFO: Waiting for pod pod-projected-configmaps-1a323581-e09d-4524-87e9-7c9cc724301a to disappear
May 25 11:01:07.472: INFO: Pod pod-projected-configmaps-1a323581-e09d-4524-87e9-7c9cc724301a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:01:07.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6321" for this suite.

• [SLOW TEST:6.279 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":804,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:01:07.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-8fe3438c-a8cd-40bc-b26b-b14ee95aafcb in namespace container-probe-7076
May 25 11:01:11.595: INFO: Started pod liveness-8fe3438c-a8cd-40bc-b26b-b14ee95aafcb in namespace container-probe-7076
STEP: checking the pod's current state and verifying that restartCount is present
May 25 11:01:11.598: INFO: Initial restart count of pod liveness-8fe3438c-a8cd-40bc-b26b-b14ee95aafcb is 0
May 25 11:01:29.639: INFO: Restart count of pod container-probe-7076/liveness-8fe3438c-a8cd-40bc-b26b-b14ee95aafcb is now 1 (18.041304977s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:01:29.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7076" for this suite.

• [SLOW TEST:22.171 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":813,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:01:29.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:01:29.778: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:01:30.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5688" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":43,"skipped":813,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:01:30.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 25 11:01:40.997: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5211 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:01:40.997: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:01:41.033019       7 log.go:172] (0xc00271c790) (0xc000c36960) Create stream
I0525 11:01:41.033047       7 log.go:172] (0xc00271c790) (0xc000c36960) Stream added, broadcasting: 1
I0525 11:01:41.035833       7 log.go:172] (0xc00271c790) Reply frame received for 1
I0525 11:01:41.035861       7 log.go:172] (0xc00271c790) (0xc0019641e0) Create stream
I0525 11:01:41.035872       7 log.go:172] (0xc00271c790) (0xc0019641e0) Stream added, broadcasting: 3
I0525 11:01:41.037002       7 log.go:172] (0xc00271c790) Reply frame received for 3
I0525 11:01:41.037056       7 log.go:172] (0xc00271c790) (0xc001753400) Create stream
I0525 11:01:41.037073       7 log.go:172] (0xc00271c790) (0xc001753400) Stream added, broadcasting: 5
I0525 11:01:41.038363       7 log.go:172] (0xc00271c790) Reply frame received for 5
I0525 11:01:41.130272       7 log.go:172] (0xc00271c790) Data frame received for 5
I0525 11:01:41.130324       7 log.go:172] (0xc001753400) (5) Data frame handling
I0525 11:01:41.130368       7 log.go:172] (0xc00271c790) Data frame received for 3
I0525 11:01:41.130514       7 log.go:172] (0xc0019641e0) (3) Data frame handling
I0525 11:01:41.130538       7 log.go:172] (0xc0019641e0) (3) Data frame sent
I0525 11:01:41.130552       7 log.go:172] (0xc00271c790) Data frame received for 3
I0525 11:01:41.130565       7 log.go:172] (0xc0019641e0) (3) Data frame handling
I0525 11:01:41.131643       7 log.go:172] (0xc00271c790) Data frame received for 1
I0525 11:01:41.131676       7 log.go:172] (0xc000c36960) (1) Data frame handling
I0525 11:01:41.131709       7 log.go:172] (0xc000c36960) (1) Data frame sent
I0525 11:01:41.131737       7 log.go:172] (0xc00271c790) (0xc000c36960) Stream removed, broadcasting: 1
I0525 11:01:41.131759       7 log.go:172] (0xc00271c790) Go away received
I0525 11:01:41.131941       7 log.go:172] (0xc00271c790) (0xc000c36960) Stream removed, broadcasting: 1
I0525 11:01:41.131965       7 log.go:172] (0xc00271c790) (0xc0019641e0) Stream removed, broadcasting: 3
I0525 11:01:41.131973       7 log.go:172] (0xc00271c790) (0xc001753400) Stream removed, broadcasting: 5
May 25 11:01:41.131: INFO: Exec stderr: ""
May 25 11:01:41.132: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5211 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:01:41.132: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:01:41.170151       7 log.go:172] (0xc0029a1ad0) (0xc001753900) Create stream
I0525 11:01:41.170179       7 log.go:172] (0xc0029a1ad0) (0xc001753900) Stream added, broadcasting: 1
I0525 11:01:41.173317       7 log.go:172] (0xc0029a1ad0) Reply frame received for 1
I0525 11:01:41.173378       7 log.go:172] (0xc0029a1ad0) (0xc001753ae0) Create stream
I0525 11:01:41.173527       7 log.go:172] (0xc0029a1ad0) (0xc001753ae0) Stream added, broadcasting: 3
I0525 11:01:41.174667       7 log.go:172] (0xc0029a1ad0) Reply frame received for 3
I0525 11:01:41.174713       7 log.go:172] (0xc0029a1ad0) (0xc001964280) Create stream
I0525 11:01:41.174731       7 log.go:172] (0xc0029a1ad0) (0xc001964280) Stream added, broadcasting: 5
I0525 11:01:41.175689       7 log.go:172] (0xc0029a1ad0) Reply frame received for 5
I0525 11:01:41.235215       7 log.go:172] (0xc0029a1ad0) Data frame received for 5
I0525 11:01:41.235253       7 log.go:172] (0xc001964280) (5) Data frame handling
I0525 11:01:41.235277       7 log.go:172] (0xc0029a1ad0) Data frame received for 3
I0525 11:01:41.235290       7 log.go:172] (0xc001753ae0) (3) Data frame handling
I0525 11:01:41.235306       7 log.go:172] (0xc001753ae0) (3) Data frame sent
I0525 11:01:41.235318       7 log.go:172] (0xc0029a1ad0) Data frame received for 3
I0525 11:01:41.235328       7 log.go:172] (0xc001753ae0) (3) Data frame handling
I0525 11:01:41.236486       7 log.go:172] (0xc0029a1ad0) Data frame received for 1
I0525 11:01:41.236507       7 log.go:172] (0xc001753900) (1) Data frame handling
I0525 11:01:41.236526       7 log.go:172] (0xc001753900) (1) Data frame sent
I0525 11:01:41.236541       7 log.go:172] (0xc0029a1ad0) (0xc001753900) Stream removed, broadcasting: 1
I0525 11:01:41.236588       7 log.go:172] (0xc0029a1ad0) Go away received
I0525 11:01:41.236665       7 log.go:172] (0xc0029a1ad0) (0xc001753900) Stream removed, broadcasting: 1
I0525 11:01:41.236695       7 log.go:172] (0xc0029a1ad0) (0xc001753ae0) Stream removed, broadcasting: 3
I0525 11:01:41.236716       7 log.go:172] (0xc0029a1ad0) (0xc001964280) Stream removed, broadcasting: 5
May 25 11:01:41.236: INFO: Exec stderr: ""
May 25 11:01:41.236: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5211 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:01:41.236: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:01:41.277100       7 log.go:172] (0xc00271cd10) (0xc000c36b40) Create stream
I0525 11:01:41.277285       7 log.go:172] (0xc00271cd10) (0xc000c36b40) Stream added, broadcasting: 1
I0525 11:01:41.280095       7 log.go:172] (0xc00271cd10) Reply frame received for 1
I0525 11:01:41.280139       7 log.go:172] (0xc00271cd10) (0xc001182780) Create stream
I0525 11:01:41.280160       7 log.go:172] (0xc00271cd10) (0xc001182780) Stream added, broadcasting: 3
I0525 11:01:41.281308       7 log.go:172] (0xc00271cd10) Reply frame received for 3
I0525 11:01:41.281381       7 log.go:172] (0xc00271cd10) (0xc001182960) Create stream
I0525 11:01:41.281397       7 log.go:172] (0xc00271cd10) (0xc001182960) Stream added, broadcasting: 5
I0525 11:01:41.282466       7 log.go:172] (0xc00271cd10) Reply frame received for 5
I0525 11:01:41.350240       7 log.go:172] (0xc00271cd10) Data frame received for 3
I0525 11:01:41.350287       7 log.go:172] (0xc001182780) (3) Data frame handling
I0525 11:01:41.350323       7 log.go:172] (0xc001182780) (3) Data frame sent
I0525 11:01:41.350376       7 log.go:172] (0xc00271cd10) Data frame received for 5
I0525 11:01:41.350407       7 log.go:172] (0xc001182960) (5) Data frame handling
I0525 11:01:41.350442       7 log.go:172] (0xc00271cd10) Data frame received for 3
I0525 11:01:41.350462       7 log.go:172] (0xc001182780) (3) Data frame handling
I0525 11:01:41.352205       7 log.go:172] (0xc00271cd10) Data frame received for 1
I0525 11:01:41.352249       7 log.go:172] (0xc000c36b40) (1) Data frame handling
I0525 11:01:41.352282       7 log.go:172] (0xc000c36b40) (1) Data frame sent
I0525 11:01:41.352305       7 log.go:172] (0xc00271cd10) (0xc000c36b40) Stream removed, broadcasting: 1
I0525 11:01:41.352329       7 log.go:172] (0xc00271cd10) Go away received
I0525 11:01:41.352422       7 log.go:172] (0xc00271cd10) (0xc000c36b40) Stream removed, broadcasting: 1
I0525 11:01:41.352446       7 log.go:172] (0xc00271cd10) (0xc001182780) Stream removed, broadcasting: 3
I0525 11:01:41.352453       7 log.go:172] (0xc00271cd10) (0xc001182960) Stream removed, broadcasting: 5
May 25 11:01:41.352: INFO: Exec stderr: ""
May 25 11:01:41.352: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5211 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:01:41.352: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:01:41.391853       7 log.go:172] (0xc00187ca50) (0xc001964820) Create stream
I0525 11:01:41.391885       7 log.go:172] (0xc00187ca50) (0xc001964820) Stream added, broadcasting: 1
I0525 11:01:41.394518       7 log.go:172] (0xc00187ca50) Reply frame received for 1
I0525 11:01:41.394541       7 log.go:172] (0xc00187ca50) (0xc000c36be0) Create stream
I0525 11:01:41.394553       7 log.go:172] (0xc00187ca50) (0xc000c36be0) Stream added, broadcasting: 3
I0525 11:01:41.395665       7 log.go:172] (0xc00187ca50) Reply frame received for 3
I0525 11:01:41.395718       7 log.go:172] (0xc00187ca50) (0xc000c36c80) Create stream
I0525 11:01:41.395734       7 log.go:172] (0xc00187ca50) (0xc000c36c80) Stream added, broadcasting: 5
I0525 11:01:41.396677       7 log.go:172] (0xc00187ca50) Reply frame received for 5
I0525 11:01:41.459675       7 log.go:172] (0xc00187ca50) Data frame received for 5
I0525 11:01:41.459765       7 log.go:172] (0xc000c36c80) (5) Data frame handling
I0525 11:01:41.464711       7 log.go:172] (0xc00187ca50) Data frame received for 3
I0525 11:01:41.464732       7 log.go:172] (0xc000c36be0) (3) Data frame handling
I0525 11:01:41.464746       7 log.go:172] (0xc000c36be0) (3) Data frame sent
I0525 11:01:41.464755       7 log.go:172] (0xc00187ca50) Data frame received for 3
I0525 11:01:41.464772       7 log.go:172] (0xc000c36be0) (3) Data frame handling
I0525 11:01:41.466406       7 log.go:172] (0xc00187ca50) Data frame received for 1
I0525 11:01:41.466481       7 log.go:172] (0xc001964820) (1) Data frame handling
I0525 11:01:41.466513       7 log.go:172] (0xc001964820) (1) Data frame sent
I0525 11:01:41.466533       7 log.go:172] (0xc00187ca50) (0xc001964820) Stream removed, broadcasting: 1
I0525 11:01:41.466552       7 log.go:172] (0xc00187ca50) Go away received
I0525 11:01:41.466710       7 log.go:172] (0xc00187ca50) (0xc001964820) Stream removed, broadcasting: 1
I0525 11:01:41.466731       7 log.go:172] (0xc00187ca50) (0xc000c36be0) Stream removed, broadcasting: 3
I0525 11:01:41.466738       7 log.go:172] (0xc00187ca50) (0xc000c36c80) Stream removed, broadcasting: 5
May 25 11:01:41.466: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 25 11:01:41.466: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5211 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:01:41.466: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:01:41.489798       7 log.go:172] (0xc002d92160) (0xc000bc2640) Create stream
I0525 11:01:41.489821       7 log.go:172] (0xc002d92160) (0xc000bc2640) Stream added, broadcasting: 1
I0525 11:01:41.492802       7 log.go:172] (0xc002d92160) Reply frame received for 1
I0525 11:01:41.492864       7 log.go:172] (0xc002d92160) (0xc000c36d20) Create stream
I0525 11:01:41.492882       7 log.go:172] (0xc002d92160) (0xc000c36d20) Stream added, broadcasting: 3
I0525 11:01:41.494056       7 log.go:172] (0xc002d92160) Reply frame received for 3
I0525 11:01:41.494085       7 log.go:172] (0xc002d92160) (0xc000bc2820) Create stream
I0525 11:01:41.494096       7 log.go:172] (0xc002d92160) (0xc000bc2820) Stream added, broadcasting: 5
I0525 11:01:41.494816       7 log.go:172] (0xc002d92160) Reply frame received for 5
I0525 11:01:41.562677       7 log.go:172] (0xc002d92160) Data frame received for 5
I0525 11:01:41.562738       7 log.go:172] (0xc000bc2820) (5) Data frame handling
I0525 11:01:41.562793       7 log.go:172] (0xc002d92160) Data frame received for 3
I0525 11:01:41.562814       7 log.go:172] (0xc000c36d20) (3) Data frame handling
I0525 11:01:41.562839       7 log.go:172] (0xc000c36d20) (3) Data frame sent
I0525 11:01:41.562853       7 log.go:172] (0xc002d92160) Data frame received for 3
I0525 11:01:41.562866       7 log.go:172] (0xc000c36d20) (3) Data frame handling
I0525 11:01:41.564489       7 log.go:172] (0xc002d92160) Data frame received for 1
I0525 11:01:41.564522       7 log.go:172] (0xc000bc2640) (1) Data frame handling
I0525 11:01:41.564549       7 log.go:172] (0xc000bc2640) (1) Data frame sent
I0525 11:01:41.564582       7 log.go:172] (0xc002d92160) (0xc000bc2640) Stream removed, broadcasting: 1
I0525 11:01:41.564617       7 log.go:172] (0xc002d92160) Go away received
I0525 11:01:41.564726       7 log.go:172] (0xc002d92160) (0xc000bc2640) Stream removed, broadcasting: 1
I0525 11:01:41.564748       7 log.go:172] (0xc002d92160) (0xc000c36d20) Stream removed, broadcasting: 3
I0525 11:01:41.564768       7 log.go:172] (0xc002d92160) (0xc000bc2820) Stream removed, broadcasting: 5
May 25 11:01:41.564: INFO: Exec stderr: ""
May 25 11:01:41.564: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5211 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:01:41.564: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:01:41.600013       7 log.go:172] (0xc00271d3f0) (0xc000c37040) Create stream
I0525 11:01:41.600046       7 log.go:172] (0xc00271d3f0) (0xc000c37040) Stream added, broadcasting: 1
I0525 11:01:41.603118       7 log.go:172] (0xc00271d3f0) Reply frame received for 1
I0525 11:01:41.603152       7 log.go:172] (0xc00271d3f0) (0xc001182aa0) Create stream
I0525 11:01:41.603169       7 log.go:172] (0xc00271d3f0) (0xc001182aa0) Stream added, broadcasting: 3
I0525 11:01:41.604057       7 log.go:172] (0xc00271d3f0) Reply frame received for 3
I0525 11:01:41.604177       7 log.go:172] (0xc00271d3f0) (0xc001753b80) Create stream
I0525 11:01:41.604191       7 log.go:172] (0xc00271d3f0) (0xc001753b80) Stream added, broadcasting: 5
I0525 11:01:41.605362       7 log.go:172] (0xc00271d3f0) Reply frame received for 5
I0525 11:01:41.664746       7 log.go:172] (0xc00271d3f0) Data frame received for 5
I0525 11:01:41.664777       7 log.go:172] (0xc001753b80) (5) Data frame handling
I0525 11:01:41.664812       7 log.go:172] (0xc00271d3f0) Data frame received for 3
I0525 11:01:41.664827       7 log.go:172] (0xc001182aa0) (3) Data frame handling
I0525 11:01:41.664853       7 log.go:172] (0xc001182aa0) (3) Data frame sent
I0525 11:01:41.664882       7 log.go:172] (0xc00271d3f0) Data frame received for 3
I0525 11:01:41.664896       7 log.go:172] (0xc001182aa0) (3) Data frame handling
I0525 11:01:41.666398       7 log.go:172] (0xc00271d3f0) Data frame received for 1
I0525 11:01:41.666427       7 log.go:172] (0xc000c37040) (1) Data frame handling
I0525 11:01:41.666468       7 log.go:172] (0xc000c37040) (1) Data frame sent
I0525 11:01:41.666494       7 log.go:172] (0xc00271d3f0) (0xc000c37040) Stream removed, broadcasting: 1
I0525 11:01:41.666522       7 log.go:172] (0xc00271d3f0) Go away received
I0525 11:01:41.666730       7 log.go:172] (0xc00271d3f0) (0xc000c37040) Stream removed, broadcasting: 1
I0525 11:01:41.666761       7 log.go:172] (0xc00271d3f0) (0xc001182aa0) Stream removed, broadcasting: 3
I0525 11:01:41.666788       7 log.go:172] (0xc00271d3f0) (0xc001753b80) Stream removed, broadcasting: 5
May 25 11:01:41.666: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 25 11:01:41.666: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5211 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:01:41.666: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:01:41.704110       7 log.go:172] (0xc002a1a370) (0xc001183680) Create stream
I0525 11:01:41.704192       7 log.go:172] (0xc002a1a370) (0xc001183680) Stream added, broadcasting: 1
I0525 11:01:41.708450       7 log.go:172] (0xc002a1a370) Reply frame received for 1
I0525 11:01:41.708488       7 log.go:172] (0xc002a1a370) (0xc000bc28c0) Create stream
I0525 11:01:41.708499       7 log.go:172] (0xc002a1a370) (0xc000bc28c0) Stream added, broadcasting: 3
I0525 11:01:41.709549       7 log.go:172] (0xc002a1a370) Reply frame received for 3
I0525 11:01:41.709574       7 log.go:172] (0xc002a1a370) (0xc000c37180) Create stream
I0525 11:01:41.709582       7 log.go:172] (0xc002a1a370) (0xc000c37180) Stream added, broadcasting: 5
I0525 11:01:41.710445       7 log.go:172] (0xc002a1a370) Reply frame received for 5
I0525 11:01:41.778513       7 log.go:172] (0xc002a1a370) Data frame received for 3
I0525 11:01:41.778544       7 log.go:172] (0xc000bc28c0) (3) Data frame handling
I0525 11:01:41.778564       7 log.go:172] (0xc000bc28c0) (3) Data frame sent
I0525 11:01:41.834928       7 log.go:172] (0xc002a1a370) Data frame received for 3
I0525 11:01:41.835011       7 log.go:172] (0xc000bc28c0) (3) Data frame handling
I0525 11:01:41.835087       7 log.go:172] (0xc002a1a370) Data frame received for 5
I0525 11:01:41.835123       7 log.go:172] (0xc000c37180) (5) Data frame handling
I0525 11:01:41.836595       7 log.go:172] (0xc002a1a370) Data frame received for 1
I0525 11:01:41.836621       7 log.go:172] (0xc001183680) (1) Data frame handling
I0525 11:01:41.836633       7 log.go:172] (0xc001183680) (1) Data frame sent
I0525 11:01:41.836649       7 log.go:172] (0xc002a1a370) (0xc001183680) Stream removed, broadcasting: 1
I0525 11:01:41.836665       7 log.go:172] (0xc002a1a370) Go away received
I0525 11:01:41.836866       7 log.go:172] (0xc002a1a370) (0xc001183680) Stream removed, broadcasting: 1
I0525 11:01:41.836899       7 log.go:172] (0xc002a1a370) (0xc000bc28c0) Stream removed, broadcasting: 3
I0525 11:01:41.836939       7 log.go:172] (0xc002a1a370) (0xc000c37180) Stream removed, broadcasting: 5
May 25 11:01:41.836: INFO: Exec stderr: ""
May 25 11:01:41.836: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5211 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:01:41.837: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:01:41.876933       7 log.go:172] (0xc00187d080) (0xc001964be0) Create stream
I0525 11:01:41.876960       7 log.go:172] (0xc00187d080) (0xc001964be0) Stream added, broadcasting: 1
I0525 11:01:41.880303       7 log.go:172] (0xc00187d080) Reply frame received for 1
I0525 11:01:41.880343       7 log.go:172] (0xc00187d080) (0xc000bc2b40) Create stream
I0525 11:01:41.880355       7 log.go:172] (0xc00187d080) (0xc000bc2b40) Stream added, broadcasting: 3
I0525 11:01:41.881445       7 log.go:172] (0xc00187d080) Reply frame received for 3
I0525 11:01:41.881489       7 log.go:172] (0xc00187d080) (0xc0011837c0) Create stream
I0525 11:01:41.881502       7 log.go:172] (0xc00187d080) (0xc0011837c0) Stream added, broadcasting: 5
I0525 11:01:41.882526       7 log.go:172] (0xc00187d080) Reply frame received for 5
I0525 11:01:41.948002       7 log.go:172] (0xc00187d080) Data frame received for 3
I0525 11:01:41.948035       7 log.go:172] (0xc000bc2b40) (3) Data frame handling
I0525 11:01:41.948048       7 log.go:172] (0xc000bc2b40) (3) Data frame sent
I0525 11:01:41.948057       7 log.go:172] (0xc00187d080) Data frame received for 3
I0525 11:01:41.948075       7 log.go:172] (0xc000bc2b40) (3) Data frame handling
I0525 11:01:41.948108       7 log.go:172] (0xc00187d080) Data frame received for 5
I0525 11:01:41.948121       7 log.go:172] (0xc0011837c0) (5) Data frame handling
I0525 11:01:41.950050       7 log.go:172] (0xc00187d080) Data frame received for 1
I0525 11:01:41.950065       7 log.go:172] (0xc001964be0) (1) Data frame handling
I0525 11:01:41.950075       7 log.go:172] (0xc001964be0) (1) Data frame sent
I0525 11:01:41.950097       7 log.go:172] (0xc00187d080) (0xc001964be0) Stream removed, broadcasting: 1
I0525 11:01:41.950117       7 log.go:172] (0xc00187d080) Go away received
I0525 11:01:41.950202       7 log.go:172] (0xc00187d080) (0xc001964be0) Stream removed, broadcasting: 1
I0525 11:01:41.950220       7 log.go:172] (0xc00187d080) (0xc000bc2b40) Stream removed, broadcasting: 3
I0525 11:01:41.950228       7 log.go:172] (0xc00187d080) (0xc0011837c0) Stream removed, broadcasting: 5
May 25 11:01:41.950: INFO: Exec stderr: ""
May 25 11:01:41.950: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5211 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:01:41.950: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:01:41.982933       7 log.go:172] (0xc002a1a9a0) (0xc001183e00) Create stream
I0525 11:01:41.982972       7 log.go:172] (0xc002a1a9a0) (0xc001183e00) Stream added, broadcasting: 1
I0525 11:01:41.986538       7 log.go:172] (0xc002a1a9a0) Reply frame received for 1
I0525 11:01:41.986590       7 log.go:172] (0xc002a1a9a0) (0xc001964e60) Create stream
I0525 11:01:41.986611       7 log.go:172] (0xc002a1a9a0) (0xc001964e60) Stream added, broadcasting: 3
I0525 11:01:41.987453       7 log.go:172] (0xc002a1a9a0) Reply frame received for 3
I0525 11:01:41.987485       7 log.go:172] (0xc002a1a9a0) (0xc001964f00) Create stream
I0525 11:01:41.987497       7 log.go:172] (0xc002a1a9a0) (0xc001964f00) Stream added, broadcasting: 5
I0525 11:01:41.988204       7 log.go:172] (0xc002a1a9a0) Reply frame received for 5
I0525 11:01:42.058262       7 log.go:172] (0xc002a1a9a0) Data frame received for 5
I0525 11:01:42.058283       7 log.go:172] (0xc001964f00) (5) Data frame handling
I0525 11:01:42.058319       7 log.go:172] (0xc002a1a9a0) Data frame received for 3
I0525 11:01:42.058362       7 log.go:172] (0xc001964e60) (3) Data frame handling
I0525 11:01:42.058394       7 log.go:172] (0xc001964e60) (3) Data frame sent
I0525 11:01:42.058418       7 log.go:172] (0xc002a1a9a0) Data frame received for 3
I0525 11:01:42.058427       7 log.go:172] (0xc001964e60) (3) Data frame handling
I0525 11:01:42.060069       7 log.go:172] (0xc002a1a9a0) Data frame received for 1
I0525 11:01:42.060087       7 log.go:172] (0xc001183e00) (1) Data frame handling
I0525 11:01:42.060097       7 log.go:172] (0xc001183e00) (1) Data frame sent
I0525 11:01:42.060111       7 log.go:172] (0xc002a1a9a0) (0xc001183e00) Stream removed, broadcasting: 1
I0525 11:01:42.060149       7 log.go:172] (0xc002a1a9a0) Go away received
I0525 11:01:42.060200       7 log.go:172] (0xc002a1a9a0) (0xc001183e00) Stream removed, broadcasting: 1
I0525 11:01:42.060220       7 log.go:172] (0xc002a1a9a0) (0xc001964e60) Stream removed, broadcasting: 3
I0525 11:01:42.060230       7 log.go:172] (0xc002a1a9a0) (0xc001964f00) Stream removed, broadcasting: 5
May 25 11:01:42.060: INFO: Exec stderr: ""
May 25 11:01:42.060: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5211 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:01:42.060: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:01:42.092100       7 log.go:172] (0xc00271da20) (0xc000c375e0) Create stream
I0525 11:01:42.092142       7 log.go:172] (0xc00271da20) (0xc000c375e0) Stream added, broadcasting: 1
I0525 11:01:42.095778       7 log.go:172] (0xc00271da20) Reply frame received for 1
I0525 11:01:42.095819       7 log.go:172] (0xc00271da20) (0xc001965040) Create stream
I0525 11:01:42.095832       7 log.go:172] (0xc00271da20) (0xc001965040) Stream added, broadcasting: 3
I0525 11:01:42.096866       7 log.go:172] (0xc00271da20) Reply frame received for 3
I0525 11:01:42.096907       7 log.go:172] (0xc00271da20) (0xc001753d60) Create stream
I0525 11:01:42.096923       7 log.go:172] (0xc00271da20) (0xc001753d60) Stream added, broadcasting: 5
I0525 11:01:42.098193       7 log.go:172] (0xc00271da20) Reply frame received for 5
I0525 11:01:42.159092       7 log.go:172] (0xc00271da20) Data frame received for 5
I0525 11:01:42.159110       7 log.go:172] (0xc001753d60) (5) Data frame handling
I0525 11:01:42.159132       7 log.go:172] (0xc00271da20) Data frame received for 3
I0525 11:01:42.159138       7 log.go:172] (0xc001965040) (3) Data frame handling
I0525 11:01:42.159144       7 log.go:172] (0xc001965040) (3) Data frame sent
I0525 11:01:42.159150       7 log.go:172] (0xc00271da20) Data frame received for 3
I0525 11:01:42.159155       7 log.go:172] (0xc001965040) (3) Data frame handling
I0525 11:01:42.160164       7 log.go:172] (0xc00271da20) Data frame received for 1
I0525 11:01:42.160182       7 log.go:172] (0xc000c375e0) (1) Data frame handling
I0525 11:01:42.160191       7 log.go:172] (0xc000c375e0) (1) Data frame sent
I0525 11:01:42.160200       7 log.go:172] (0xc00271da20) (0xc000c375e0) Stream removed, broadcasting: 1
I0525 11:01:42.160207       7 log.go:172] (0xc00271da20) Go away received
I0525 11:01:42.160304       7 log.go:172] (0xc00271da20) (0xc000c375e0) Stream removed, broadcasting: 1
I0525 11:01:42.160316       7 log.go:172] (0xc00271da20) (0xc001965040) Stream removed, broadcasting: 3
I0525 11:01:42.160323       7 log.go:172] (0xc00271da20) (0xc001753d60) Stream removed, broadcasting: 5
May 25 11:01:42.160: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:01:42.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5211" for this suite.

• [SLOW TEST:11.364 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":846,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:01:42.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May 25 11:01:42.505: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:01:45.500: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:01:56.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4316" for this suite.

• [SLOW TEST:14.092 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":45,"skipped":862,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:01:56.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2536 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2536;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2536 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2536;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2536.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2536.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2536.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2536.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2536.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2536.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2536.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2536.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2536.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2536.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2536.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.68.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.68.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.68.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.68.223_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2536 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2536;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2536 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2536;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2536.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2536.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2536.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2536.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2536.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2536.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2536.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2536.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2536.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2536.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2536.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2536.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.68.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.68.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.68.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.68.223_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 25 11:02:04.612: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.615: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.618: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.622: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.625: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.628: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.631: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.634: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.674: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.677: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.681: INFO: Unable to read jessie_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.684: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.688: INFO: Unable to read jessie_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.691: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.695: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.698: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:04.722: INFO: Lookups using dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2536 wheezy_tcp@dns-test-service.dns-2536 wheezy_udp@dns-test-service.dns-2536.svc wheezy_tcp@dns-test-service.dns-2536.svc wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2536 jessie_tcp@dns-test-service.dns-2536 jessie_udp@dns-test-service.dns-2536.svc jessie_tcp@dns-test-service.dns-2536.svc jessie_udp@_http._tcp.dns-test-service.dns-2536.svc jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc]

May 25 11:02:09.794: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.825: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.829: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.832: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.835: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.838: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.841: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.844: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.864: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.867: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.869: INFO: Unable to read jessie_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.872: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.874: INFO: Unable to read jessie_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.877: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.879: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.882: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:09.943: INFO: Lookups using dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2536 wheezy_tcp@dns-test-service.dns-2536 wheezy_udp@dns-test-service.dns-2536.svc wheezy_tcp@dns-test-service.dns-2536.svc wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2536 jessie_tcp@dns-test-service.dns-2536 jessie_udp@dns-test-service.dns-2536.svc jessie_tcp@dns-test-service.dns-2536.svc jessie_udp@_http._tcp.dns-test-service.dns-2536.svc jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc]

May 25 11:02:14.728: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.732: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.736: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.738: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.741: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.744: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.747: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.750: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.773: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.777: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.780: INFO: Unable to read jessie_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.783: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.787: INFO: Unable to read jessie_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.790: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.793: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.797: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:14.819: INFO: Lookups using dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2536 wheezy_tcp@dns-test-service.dns-2536 wheezy_udp@dns-test-service.dns-2536.svc wheezy_tcp@dns-test-service.dns-2536.svc wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2536 jessie_tcp@dns-test-service.dns-2536 jessie_udp@dns-test-service.dns-2536.svc jessie_tcp@dns-test-service.dns-2536.svc jessie_udp@_http._tcp.dns-test-service.dns-2536.svc jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc]

May 25 11:02:19.727: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.730: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.733: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.736: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.739: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.742: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.745: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.747: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.770: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.774: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.777: INFO: Unable to read jessie_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.779: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.783: INFO: Unable to read jessie_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.787: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.789: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.793: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:19.810: INFO: Lookups using dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2536 wheezy_tcp@dns-test-service.dns-2536 wheezy_udp@dns-test-service.dns-2536.svc wheezy_tcp@dns-test-service.dns-2536.svc wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2536 jessie_tcp@dns-test-service.dns-2536 jessie_udp@dns-test-service.dns-2536.svc jessie_tcp@dns-test-service.dns-2536.svc jessie_udp@_http._tcp.dns-test-service.dns-2536.svc jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc]

May 25 11:02:24.727: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.731: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.733: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.736: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.739: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.742: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.745: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.748: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.768: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.771: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.774: INFO: Unable to read jessie_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.776: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.779: INFO: Unable to read jessie_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.782: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.784: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.787: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:24.804: INFO: Lookups using dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2536 wheezy_tcp@dns-test-service.dns-2536 wheezy_udp@dns-test-service.dns-2536.svc wheezy_tcp@dns-test-service.dns-2536.svc wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2536 jessie_tcp@dns-test-service.dns-2536 jessie_udp@dns-test-service.dns-2536.svc jessie_tcp@dns-test-service.dns-2536.svc jessie_udp@_http._tcp.dns-test-service.dns-2536.svc jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc]

May 25 11:02:29.728: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.732: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.735: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.738: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.741: INFO: Unable to read wheezy_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.743: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.746: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.748: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.768: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.772: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.775: INFO: Unable to read jessie_udp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.779: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536 from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.782: INFO: Unable to read jessie_udp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.785: INFO: Unable to read jessie_tcp@dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.789: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.792: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc from pod dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2: the server could not find the requested resource (get pods dns-test-74885604-588b-4c3b-b72c-a41cb91500a2)
May 25 11:02:29.810: INFO: Lookups using dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2536 wheezy_tcp@dns-test-service.dns-2536 wheezy_udp@dns-test-service.dns-2536.svc wheezy_tcp@dns-test-service.dns-2536.svc wheezy_udp@_http._tcp.dns-test-service.dns-2536.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2536.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2536 jessie_tcp@dns-test-service.dns-2536 jessie_udp@dns-test-service.dns-2536.svc jessie_tcp@dns-test-service.dns-2536.svc jessie_udp@_http._tcp.dns-test-service.dns-2536.svc jessie_tcp@_http._tcp.dns-test-service.dns-2536.svc]

May 25 11:02:34.939: INFO: DNS probes using dns-2536/dns-test-74885604-588b-4c3b-b72c-a41cb91500a2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:02:35.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2536" for this suite.

• [SLOW TEST:39.488 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":46,"skipped":881,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:02:35.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
May 25 11:02:35.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config cluster-info'
May 25 11:02:35.975: INFO: stderr: ""
May 25 11:02:35.975: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:02:35.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9350" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":47,"skipped":892,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:02:35.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:02:52.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2409" for this suite.

• [SLOW TEST:16.315 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":48,"skipped":899,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:02:52.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
May 25 11:02:52.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3937'
May 25 11:02:54.281: INFO: stderr: ""
May 25 11:02:54.281: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 25 11:02:54.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3937'
May 25 11:02:54.419: INFO: stderr: ""
May 25 11:02:54.419: INFO: stdout: "update-demo-nautilus-gdpk4 update-demo-nautilus-hdxc2 "
May 25 11:02:54.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gdpk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3937'
May 25 11:02:54.512: INFO: stderr: ""
May 25 11:02:54.513: INFO: stdout: ""
May 25 11:02:54.513: INFO: update-demo-nautilus-gdpk4 is created but not running
May 25 11:02:59.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3937'
May 25 11:02:59.625: INFO: stderr: ""
May 25 11:02:59.625: INFO: stdout: "update-demo-nautilus-gdpk4 update-demo-nautilus-hdxc2 "
May 25 11:02:59.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gdpk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3937'
May 25 11:02:59.726: INFO: stderr: ""
May 25 11:02:59.726: INFO: stdout: "true"
May 25 11:02:59.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gdpk4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3937'
May 25 11:02:59.828: INFO: stderr: ""
May 25 11:02:59.828: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 25 11:02:59.828: INFO: validating pod update-demo-nautilus-gdpk4
May 25 11:02:59.848: INFO: got data: {
  "image": "nautilus.jpg"
}

May 25 11:02:59.849: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 25 11:02:59.849: INFO: update-demo-nautilus-gdpk4 is verified up and running
May 25 11:02:59.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hdxc2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3937'
May 25 11:02:59.946: INFO: stderr: ""
May 25 11:02:59.946: INFO: stdout: "true"
May 25 11:02:59.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hdxc2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3937'
May 25 11:03:00.041: INFO: stderr: ""
May 25 11:03:00.041: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 25 11:03:00.041: INFO: validating pod update-demo-nautilus-hdxc2
May 25 11:03:00.045: INFO: got data: {
  "image": "nautilus.jpg"
}

May 25 11:03:00.045: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 25 11:03:00.045: INFO: update-demo-nautilus-hdxc2 is verified up and running
STEP: using delete to clean up resources
May 25 11:03:00.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3937'
May 25 11:03:00.152: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:03:00.152: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 25 11:03:00.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3937'
May 25 11:03:00.238: INFO: stderr: "No resources found in kubectl-3937 namespace.\n"
May 25 11:03:00.238: INFO: stdout: ""
May 25 11:03:00.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3937 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:03:00.343: INFO: stderr: ""
May 25 11:03:00.343: INFO: stdout: "update-demo-nautilus-gdpk4\nupdate-demo-nautilus-hdxc2\n"
May 25 11:03:00.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3937'
May 25 11:03:00.948: INFO: stderr: "No resources found in kubectl-3937 namespace.\n"
May 25 11:03:00.948: INFO: stdout: ""
May 25 11:03:00.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3937 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:03:01.065: INFO: stderr: ""
May 25 11:03:01.065: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:03:01.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3937" for this suite.

• [SLOW TEST:8.774 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":49,"skipped":916,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:03:01.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:03:01.395: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef5120bc-c622-468b-a8a3-fdcf4a20c796" in namespace "downward-api-127" to be "Succeeded or Failed"
May 25 11:03:01.500: INFO: Pod "downwardapi-volume-ef5120bc-c622-468b-a8a3-fdcf4a20c796": Phase="Pending", Reason="", readiness=false. Elapsed: 104.522598ms
May 25 11:03:03.504: INFO: Pod "downwardapi-volume-ef5120bc-c622-468b-a8a3-fdcf4a20c796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108692875s
May 25 11:03:05.509: INFO: Pod "downwardapi-volume-ef5120bc-c622-468b-a8a3-fdcf4a20c796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113715s
STEP: Saw pod success
May 25 11:03:05.509: INFO: Pod "downwardapi-volume-ef5120bc-c622-468b-a8a3-fdcf4a20c796" satisfied condition "Succeeded or Failed"
May 25 11:03:05.513: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-ef5120bc-c622-468b-a8a3-fdcf4a20c796 container client-container: 
STEP: delete the pod
May 25 11:03:05.622: INFO: Waiting for pod downwardapi-volume-ef5120bc-c622-468b-a8a3-fdcf4a20c796 to disappear
May 25 11:03:05.632: INFO: Pod downwardapi-volume-ef5120bc-c622-468b-a8a3-fdcf4a20c796 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:03:05.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-127" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":932,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:03:05.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May 25 11:03:05.874: INFO: Waiting up to 5m0s for pod "pod-fb446e26-031c-4a12-84dc-c5631c1a04b0" in namespace "emptydir-5629" to be "Succeeded or Failed"
May 25 11:03:05.901: INFO: Pod "pod-fb446e26-031c-4a12-84dc-c5631c1a04b0": Phase="Pending", Reason="", readiness=false. Elapsed: 27.280889ms
May 25 11:03:07.906: INFO: Pod "pod-fb446e26-031c-4a12-84dc-c5631c1a04b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03151908s
May 25 11:03:09.911: INFO: Pod "pod-fb446e26-031c-4a12-84dc-c5631c1a04b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036567749s
STEP: Saw pod success
May 25 11:03:09.911: INFO: Pod "pod-fb446e26-031c-4a12-84dc-c5631c1a04b0" satisfied condition "Succeeded or Failed"
May 25 11:03:09.914: INFO: Trying to get logs from node kali-worker pod pod-fb446e26-031c-4a12-84dc-c5631c1a04b0 container test-container: 
STEP: delete the pod
May 25 11:03:10.058: INFO: Waiting for pod pod-fb446e26-031c-4a12-84dc-c5631c1a04b0 to disappear
May 25 11:03:10.089: INFO: Pod pod-fb446e26-031c-4a12-84dc-c5631c1a04b0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:03:10.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5629" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":936,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:03:10.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:03:21.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8979" for this suite.

• [SLOW TEST:11.254 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":52,"skipped":942,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:03:21.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:03:26.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9004" for this suite.

• [SLOW TEST:5.106 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":53,"skipped":999,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:03:26.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
May 25 11:03:26.643: INFO: Waiting up to 5m0s for pod "pod-e3f394a1-7eab-49de-a80a-64018a86f544" in namespace "emptydir-3170" to be "Succeeded or Failed"
May 25 11:03:26.646: INFO: Pod "pod-e3f394a1-7eab-49de-a80a-64018a86f544": Phase="Pending", Reason="", readiness=false. Elapsed: 3.414616ms
May 25 11:03:28.689: INFO: Pod "pod-e3f394a1-7eab-49de-a80a-64018a86f544": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045842735s
May 25 11:03:30.703: INFO: Pod "pod-e3f394a1-7eab-49de-a80a-64018a86f544": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060136546s
STEP: Saw pod success
May 25 11:03:30.703: INFO: Pod "pod-e3f394a1-7eab-49de-a80a-64018a86f544" satisfied condition "Succeeded or Failed"
May 25 11:03:30.706: INFO: Trying to get logs from node kali-worker pod pod-e3f394a1-7eab-49de-a80a-64018a86f544 container test-container: 
STEP: delete the pod
May 25 11:03:30.743: INFO: Waiting for pod pod-e3f394a1-7eab-49de-a80a-64018a86f544 to disappear
May 25 11:03:30.760: INFO: Pod pod-e3f394a1-7eab-49de-a80a-64018a86f544 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:03:30.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3170" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":1034,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:03:30.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
May 25 11:03:31.042: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May 25 11:03:31.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3731'
May 25 11:03:33.921: INFO: stderr: ""
May 25 11:03:33.921: INFO: stdout: "service/agnhost-slave created\n"
May 25 11:03:33.921: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May 25 11:03:33.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3731'
May 25 11:03:36.688: INFO: stderr: ""
May 25 11:03:36.688: INFO: stdout: "service/agnhost-master created\n"
May 25 11:03:36.688: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 25 11:03:36.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3731'
May 25 11:03:39.190: INFO: stderr: ""
May 25 11:03:39.190: INFO: stdout: "service/frontend created\n"
May 25 11:03:39.190: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May 25 11:03:39.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3731'
May 25 11:03:41.634: INFO: stderr: ""
May 25 11:03:41.634: INFO: stdout: "deployment.apps/frontend created\n"
May 25 11:03:41.634: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 25 11:03:41.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3731'
May 25 11:03:44.137: INFO: stderr: ""
May 25 11:03:44.137: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 25 11:03:44.138: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 25 11:03:44.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3731'
May 25 11:03:46.937: INFO: stderr: ""
May 25 11:03:46.937: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 25 11:03:46.937: INFO: Waiting for all frontend pods to be Running.
May 25 11:03:56.988: INFO: Waiting for frontend to serve content.
May 25 11:03:56.997: INFO: Trying to add a new entry to the guestbook.
May 25 11:03:57.007: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 25 11:03:57.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3731'
May 25 11:03:57.204: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:03:57.204: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May 25 11:03:57.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3731'
May 25 11:03:57.377: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:03:57.377: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 25 11:03:57.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3731'
May 25 11:03:57.522: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:03:57.522: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 25 11:03:57.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3731'
May 25 11:03:57.622: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:03:57.622: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 25 11:03:57.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3731'
May 25 11:03:58.332: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:03:58.332: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 25 11:03:58.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3731'
May 25 11:03:58.773: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:03:58.773: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:03:58.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3731" for this suite.

• [SLOW TEST:28.536 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":55,"skipped":1035,"failed":0}
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:03:59.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:04:18.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5381" for this suite.

• [SLOW TEST:19.465 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":56,"skipped":1035,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:04:18.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-9411
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 25 11:04:18.820: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 25 11:04:18.928: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 11:04:20.933: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 11:04:22.992: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 11:04:24.933: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:04:26.933: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:04:28.932: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:04:30.933: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:04:32.943: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:04:34.932: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:04:36.950: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:04:39.057: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:04:40.932: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:04:42.934: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 25 11:04:42.942: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 25 11:04:46.974: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.98:8080/dial?request=hostname&protocol=udp&host=10.244.2.97&port=8081&tries=1'] Namespace:pod-network-test-9411 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:04:46.974: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:04:47.011684       7 log.go:172] (0xc00271c630) (0xc001b28960) Create stream
I0525 11:04:47.011740       7 log.go:172] (0xc00271c630) (0xc001b28960) Stream added, broadcasting: 1
I0525 11:04:47.014640       7 log.go:172] (0xc00271c630) Reply frame received for 1
I0525 11:04:47.014717       7 log.go:172] (0xc00271c630) (0xc00169d900) Create stream
I0525 11:04:47.014833       7 log.go:172] (0xc00271c630) (0xc00169d900) Stream added, broadcasting: 3
I0525 11:04:47.015899       7 log.go:172] (0xc00271c630) Reply frame received for 3
I0525 11:04:47.015946       7 log.go:172] (0xc00271c630) (0xc001964a00) Create stream
I0525 11:04:47.015962       7 log.go:172] (0xc00271c630) (0xc001964a00) Stream added, broadcasting: 5
I0525 11:04:47.016975       7 log.go:172] (0xc00271c630) Reply frame received for 5
I0525 11:04:47.176401       7 log.go:172] (0xc00271c630) Data frame received for 3
I0525 11:04:47.176432       7 log.go:172] (0xc00169d900) (3) Data frame handling
I0525 11:04:47.176460       7 log.go:172] (0xc00169d900) (3) Data frame sent
I0525 11:04:47.178045       7 log.go:172] (0xc00271c630) Data frame received for 3
I0525 11:04:47.178071       7 log.go:172] (0xc00169d900) (3) Data frame handling
I0525 11:04:47.178105       7 log.go:172] (0xc00271c630) Data frame received for 5
I0525 11:04:47.178138       7 log.go:172] (0xc001964a00) (5) Data frame handling
I0525 11:04:47.180365       7 log.go:172] (0xc00271c630) Data frame received for 1
I0525 11:04:47.180392       7 log.go:172] (0xc001b28960) (1) Data frame handling
I0525 11:04:47.180450       7 log.go:172] (0xc001b28960) (1) Data frame sent
I0525 11:04:47.180525       7 log.go:172] (0xc00271c630) (0xc001b28960) Stream removed, broadcasting: 1
I0525 11:04:47.180622       7 log.go:172] (0xc00271c630) Go away received
I0525 11:04:47.180669       7 log.go:172] (0xc00271c630) (0xc001b28960) Stream removed, broadcasting: 1
I0525 11:04:47.180683       7 log.go:172] (0xc00271c630) (0xc00169d900) Stream removed, broadcasting: 3
I0525 11:04:47.180699       7 log.go:172] (0xc00271c630) (0xc001964a00) Stream removed, broadcasting: 5
May 25 11:04:47.180: INFO: Waiting for responses: map[]
May 25 11:04:47.195: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.98:8080/dial?request=hostname&protocol=udp&host=10.244.1.98&port=8081&tries=1'] Namespace:pod-network-test-9411 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:04:47.195: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:04:47.260091       7 log.go:172] (0xc0029a1810) (0xc001965220) Create stream
I0525 11:04:47.260130       7 log.go:172] (0xc0029a1810) (0xc001965220) Stream added, broadcasting: 1
I0525 11:04:47.262699       7 log.go:172] (0xc0029a1810) Reply frame received for 1
I0525 11:04:47.262760       7 log.go:172] (0xc0029a1810) (0xc00169d9a0) Create stream
I0525 11:04:47.262777       7 log.go:172] (0xc0029a1810) (0xc00169d9a0) Stream added, broadcasting: 3
I0525 11:04:47.263945       7 log.go:172] (0xc0029a1810) Reply frame received for 3
I0525 11:04:47.264004       7 log.go:172] (0xc0029a1810) (0xc00169dae0) Create stream
I0525 11:04:47.264028       7 log.go:172] (0xc0029a1810) (0xc00169dae0) Stream added, broadcasting: 5
I0525 11:04:47.264942       7 log.go:172] (0xc0029a1810) Reply frame received for 5
I0525 11:04:47.332254       7 log.go:172] (0xc0029a1810) Data frame received for 3
I0525 11:04:47.332293       7 log.go:172] (0xc00169d9a0) (3) Data frame handling
I0525 11:04:47.332335       7 log.go:172] (0xc00169d9a0) (3) Data frame sent
I0525 11:04:47.332822       7 log.go:172] (0xc0029a1810) Data frame received for 3
I0525 11:04:47.332877       7 log.go:172] (0xc0029a1810) Data frame received for 5
I0525 11:04:47.332919       7 log.go:172] (0xc00169dae0) (5) Data frame handling
I0525 11:04:47.332957       7 log.go:172] (0xc00169d9a0) (3) Data frame handling
I0525 11:04:47.335616       7 log.go:172] (0xc0029a1810) Data frame received for 1
I0525 11:04:47.335651       7 log.go:172] (0xc001965220) (1) Data frame handling
I0525 11:04:47.335688       7 log.go:172] (0xc001965220) (1) Data frame sent
I0525 11:04:47.335734       7 log.go:172] (0xc0029a1810) (0xc001965220) Stream removed, broadcasting: 1
I0525 11:04:47.335765       7 log.go:172] (0xc0029a1810) Go away received
I0525 11:04:47.335921       7 log.go:172] (0xc0029a1810) (0xc001965220) Stream removed, broadcasting: 1
I0525 11:04:47.335962       7 log.go:172] (0xc0029a1810) (0xc00169d9a0) Stream removed, broadcasting: 3
I0525 11:04:47.335983       7 log.go:172] (0xc0029a1810) (0xc00169dae0) Stream removed, broadcasting: 5
May 25 11:04:47.336: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:04:47.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9411" for this suite.

• [SLOW TEST:28.574 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":1056,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:04:47.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
May 25 11:04:47.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1396'
May 25 11:04:50.688: INFO: stderr: ""
May 25 11:04:50.688: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 25 11:04:50.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1396'
May 25 11:04:50.790: INFO: stderr: ""
May 25 11:04:50.790: INFO: stdout: "update-demo-nautilus-8dzdk update-demo-nautilus-jrszh "
May 25 11:04:50.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8dzdk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:04:51.002: INFO: stderr: ""
May 25 11:04:51.002: INFO: stdout: ""
May 25 11:04:51.002: INFO: update-demo-nautilus-8dzdk is created but not running
May 25 11:04:56.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1396'
May 25 11:04:56.118: INFO: stderr: ""
May 25 11:04:56.118: INFO: stdout: "update-demo-nautilus-8dzdk update-demo-nautilus-jrszh "
May 25 11:04:56.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8dzdk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:04:56.269: INFO: stderr: ""
May 25 11:04:56.269: INFO: stdout: "true"
May 25 11:04:56.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8dzdk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:04:56.665: INFO: stderr: ""
May 25 11:04:56.665: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 25 11:04:56.665: INFO: validating pod update-demo-nautilus-8dzdk
May 25 11:04:56.963: INFO: got data: {
  "image": "nautilus.jpg"
}

May 25 11:04:56.963: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 25 11:04:56.963: INFO: update-demo-nautilus-8dzdk is verified up and running
May 25 11:04:56.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrszh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:04:57.054: INFO: stderr: ""
May 25 11:04:57.054: INFO: stdout: "true"
May 25 11:04:57.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrszh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:04:57.141: INFO: stderr: ""
May 25 11:04:57.141: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 25 11:04:57.141: INFO: validating pod update-demo-nautilus-jrszh
May 25 11:04:57.180: INFO: got data: {
  "image": "nautilus.jpg"
}

May 25 11:04:57.180: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 25 11:04:57.180: INFO: update-demo-nautilus-jrszh is verified up and running
STEP: scaling down the replication controller
May 25 11:04:57.184: INFO: scanned /root for discovery docs: 
May 25 11:04:57.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1396'
May 25 11:04:58.383: INFO: stderr: ""
May 25 11:04:58.383: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 25 11:04:58.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1396'
May 25 11:04:58.500: INFO: stderr: ""
May 25 11:04:58.500: INFO: stdout: "update-demo-nautilus-8dzdk update-demo-nautilus-jrszh "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 25 11:05:03.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1396'
May 25 11:05:03.613: INFO: stderr: ""
May 25 11:05:03.613: INFO: stdout: "update-demo-nautilus-8dzdk update-demo-nautilus-jrszh "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 25 11:05:08.614: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1396'
May 25 11:05:08.715: INFO: stderr: ""
May 25 11:05:08.715: INFO: stdout: "update-demo-nautilus-8dzdk "
May 25 11:05:08.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8dzdk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:05:08.812: INFO: stderr: ""
May 25 11:05:08.813: INFO: stdout: "true"
May 25 11:05:08.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8dzdk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:05:08.903: INFO: stderr: ""
May 25 11:05:08.903: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 25 11:05:08.903: INFO: validating pod update-demo-nautilus-8dzdk
May 25 11:05:08.907: INFO: got data: {
  "image": "nautilus.jpg"
}

May 25 11:05:08.907: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 25 11:05:08.907: INFO: update-demo-nautilus-8dzdk is verified up and running
STEP: scaling up the replication controller
May 25 11:05:08.910: INFO: scanned /root for discovery docs: 
May 25 11:05:08.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1396'
May 25 11:05:10.180: INFO: stderr: ""
May 25 11:05:10.180: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 25 11:05:10.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1396'
May 25 11:05:10.471: INFO: stderr: ""
May 25 11:05:10.471: INFO: stdout: "update-demo-nautilus-5cwwt update-demo-nautilus-8dzdk "
May 25 11:05:10.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5cwwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:05:10.942: INFO: stderr: ""
May 25 11:05:10.942: INFO: stdout: ""
May 25 11:05:10.942: INFO: update-demo-nautilus-5cwwt is created but not running
May 25 11:05:15.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1396'
May 25 11:05:16.036: INFO: stderr: ""
May 25 11:05:16.036: INFO: stdout: "update-demo-nautilus-5cwwt update-demo-nautilus-8dzdk "
May 25 11:05:16.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5cwwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:05:16.125: INFO: stderr: ""
May 25 11:05:16.125: INFO: stdout: "true"
May 25 11:05:16.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5cwwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:05:16.211: INFO: stderr: ""
May 25 11:05:16.211: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 25 11:05:16.211: INFO: validating pod update-demo-nautilus-5cwwt
May 25 11:05:16.216: INFO: got data: {
  "image": "nautilus.jpg"
}

May 25 11:05:16.216: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 25 11:05:16.216: INFO: update-demo-nautilus-5cwwt is verified up and running
May 25 11:05:16.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8dzdk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:05:16.306: INFO: stderr: ""
May 25 11:05:16.306: INFO: stdout: "true"
May 25 11:05:16.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8dzdk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1396'
May 25 11:05:16.405: INFO: stderr: ""
May 25 11:05:16.405: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 25 11:05:16.405: INFO: validating pod update-demo-nautilus-8dzdk
May 25 11:05:16.409: INFO: got data: {
  "image": "nautilus.jpg"
}

May 25 11:05:16.410: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 25 11:05:16.410: INFO: update-demo-nautilus-8dzdk is verified up and running
STEP: using delete to clean up resources
May 25 11:05:16.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1396'
May 25 11:05:16.543: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:05:16.543: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 25 11:05:16.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1396'
May 25 11:05:16.667: INFO: stderr: "No resources found in kubectl-1396 namespace.\n"
May 25 11:05:16.667: INFO: stdout: ""
May 25 11:05:16.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1396 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:05:16.835: INFO: stderr: ""
May 25 11:05:16.835: INFO: stdout: "update-demo-nautilus-5cwwt\nupdate-demo-nautilus-8dzdk\n"
May 25 11:05:17.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1396'
May 25 11:05:17.491: INFO: stderr: "No resources found in kubectl-1396 namespace.\n"
May 25 11:05:17.491: INFO: stdout: ""
May 25 11:05:17.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1396 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:05:17.591: INFO: stderr: ""
May 25 11:05:17.591: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:05:17.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1396" for this suite.

• [SLOW TEST:30.254 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":58,"skipped":1066,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:05:17.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 25 11:05:17.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9456'
May 25 11:05:18.045: INFO: stderr: ""
May 25 11:05:18.045: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
May 25 11:05:18.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9456'
May 25 11:05:23.792: INFO: stderr: ""
May 25 11:05:23.793: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:05:23.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9456" for this suite.

• [SLOW TEST:6.208 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":59,"skipped":1099,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:05:23.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0525 11:05:25.006862       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 25 11:05:25.006: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:05:25.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4729" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":60,"skipped":1102,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:05:25.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-5932
STEP: creating replication controller nodeport-test in namespace services-5932
I0525 11:05:25.301974       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-5932, replica count: 2
I0525 11:05:28.352568       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:05:31.352838       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 25 11:05:31.353: INFO: Creating new exec pod
May 25 11:05:36.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5932 execpodgl24l -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
May 25 11:05:36.630: INFO: stderr: "I0525 11:05:36.517331    1202 log.go:172] (0xc0003b4160) (0xc0008d60a0) Create stream\nI0525 11:05:36.517406    1202 log.go:172] (0xc0003b4160) (0xc0008d60a0) Stream added, broadcasting: 1\nI0525 11:05:36.520042    1202 log.go:172] (0xc0003b4160) Reply frame received for 1\nI0525 11:05:36.520068    1202 log.go:172] (0xc0003b4160) (0xc0006dd400) Create stream\nI0525 11:05:36.520076    1202 log.go:172] (0xc0003b4160) (0xc0006dd400) Stream added, broadcasting: 3\nI0525 11:05:36.521097    1202 log.go:172] (0xc0003b4160) Reply frame received for 3\nI0525 11:05:36.521377    1202 log.go:172] (0xc0003b4160) (0xc0008d6140) Create stream\nI0525 11:05:36.521398    1202 log.go:172] (0xc0003b4160) (0xc0008d6140) Stream added, broadcasting: 5\nI0525 11:05:36.522371    1202 log.go:172] (0xc0003b4160) Reply frame received for 5\nI0525 11:05:36.598124    1202 log.go:172] (0xc0003b4160) Data frame received for 5\nI0525 11:05:36.598158    1202 log.go:172] (0xc0008d6140) (5) Data frame handling\nI0525 11:05:36.598180    1202 log.go:172] (0xc0008d6140) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0525 11:05:36.622046    1202 log.go:172] (0xc0003b4160) Data frame received for 5\nI0525 11:05:36.622080    1202 log.go:172] (0xc0008d6140) (5) Data frame handling\nI0525 11:05:36.622092    1202 log.go:172] (0xc0008d6140) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0525 11:05:36.622415    1202 log.go:172] (0xc0003b4160) Data frame received for 5\nI0525 11:05:36.622430    1202 log.go:172] (0xc0008d6140) (5) Data frame handling\nI0525 11:05:36.622449    1202 log.go:172] (0xc0003b4160) Data frame received for 3\nI0525 11:05:36.622457    1202 log.go:172] (0xc0006dd400) (3) Data frame handling\nI0525 11:05:36.624534    1202 log.go:172] (0xc0003b4160) Data frame received for 1\nI0525 11:05:36.624552    1202 log.go:172] (0xc0008d60a0) (1) Data frame handling\nI0525 11:05:36.624563    1202 log.go:172] (0xc0008d60a0) (1) Data frame sent\nI0525 11:05:36.624583    1202 log.go:172] (0xc0003b4160) (0xc0008d60a0) Stream removed, broadcasting: 1\nI0525 11:05:36.624605    1202 log.go:172] (0xc0003b4160) Go away received\nI0525 11:05:36.625368    1202 log.go:172] (0xc0003b4160) (0xc0008d60a0) Stream removed, broadcasting: 1\nI0525 11:05:36.625397    1202 log.go:172] (0xc0003b4160) (0xc0006dd400) Stream removed, broadcasting: 3\nI0525 11:05:36.625410    1202 log.go:172] (0xc0003b4160) (0xc0008d6140) Stream removed, broadcasting: 5\n"
May 25 11:05:36.631: INFO: stdout: ""
May 25 11:05:36.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5932 execpodgl24l -- /bin/sh -x -c nc -zv -t -w 2 10.110.213.177 80'
May 25 11:05:36.868: INFO: stderr: "I0525 11:05:36.759718    1222 log.go:172] (0xc000946160) (0xc0006c9400) Create stream\nI0525 11:05:36.759789    1222 log.go:172] (0xc000946160) (0xc0006c9400) Stream added, broadcasting: 1\nI0525 11:05:36.762741    1222 log.go:172] (0xc000946160) Reply frame received for 1\nI0525 11:05:36.762785    1222 log.go:172] (0xc000946160) (0xc000a5a000) Create stream\nI0525 11:05:36.762798    1222 log.go:172] (0xc000946160) (0xc000a5a000) Stream added, broadcasting: 3\nI0525 11:05:36.763982    1222 log.go:172] (0xc000946160) Reply frame received for 3\nI0525 11:05:36.764017    1222 log.go:172] (0xc000946160) (0xc0006c94a0) Create stream\nI0525 11:05:36.764031    1222 log.go:172] (0xc000946160) (0xc0006c94a0) Stream added, broadcasting: 5\nI0525 11:05:36.765390    1222 log.go:172] (0xc000946160) Reply frame received for 5\nI0525 11:05:36.859895    1222 log.go:172] (0xc000946160) Data frame received for 5\nI0525 11:05:36.859933    1222 log.go:172] (0xc0006c94a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.213.177 80\nConnection to 10.110.213.177 80 port [tcp/http] succeeded!\nI0525 11:05:36.859968    1222 log.go:172] (0xc000946160) Data frame received for 3\nI0525 11:05:36.859998    1222 log.go:172] (0xc000a5a000) (3) Data frame handling\nI0525 11:05:36.860069    1222 log.go:172] (0xc0006c94a0) (5) Data frame sent\nI0525 11:05:36.860114    1222 log.go:172] (0xc000946160) Data frame received for 5\nI0525 11:05:36.860177    1222 log.go:172] (0xc0006c94a0) (5) Data frame handling\nI0525 11:05:36.861996    1222 log.go:172] (0xc000946160) Data frame received for 1\nI0525 11:05:36.862016    1222 log.go:172] (0xc0006c9400) (1) Data frame handling\nI0525 11:05:36.862035    1222 log.go:172] (0xc0006c9400) (1) Data frame sent\nI0525 11:05:36.862066    1222 log.go:172] (0xc000946160) (0xc0006c9400) Stream removed, broadcasting: 1\nI0525 11:05:36.862091    1222 log.go:172] (0xc000946160) Go away received\nI0525 11:05:36.862510    1222 log.go:172] (0xc000946160) (0xc0006c9400) Stream removed, broadcasting: 1\nI0525 11:05:36.862527    1222 log.go:172] (0xc000946160) (0xc000a5a000) Stream removed, broadcasting: 3\nI0525 11:05:36.862536    1222 log.go:172] (0xc000946160) (0xc0006c94a0) Stream removed, broadcasting: 5\n"
May 25 11:05:36.868: INFO: stdout: ""
May 25 11:05:36.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5932 execpodgl24l -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 31844'
May 25 11:05:37.066: INFO: stderr: "I0525 11:05:36.994730    1242 log.go:172] (0xc000a8a0b0) (0xc0004ecc80) Create stream\nI0525 11:05:36.994782    1242 log.go:172] (0xc000a8a0b0) (0xc0004ecc80) Stream added, broadcasting: 1\nI0525 11:05:36.997588    1242 log.go:172] (0xc000a8a0b0) Reply frame received for 1\nI0525 11:05:36.997628    1242 log.go:172] (0xc000a8a0b0) (0xc000c4a000) Create stream\nI0525 11:05:36.997640    1242 log.go:172] (0xc000a8a0b0) (0xc000c4a000) Stream added, broadcasting: 3\nI0525 11:05:36.998684    1242 log.go:172] (0xc000a8a0b0) Reply frame received for 3\nI0525 11:05:36.998710    1242 log.go:172] (0xc000a8a0b0) (0xc0002f6000) Create stream\nI0525 11:05:36.998722    1242 log.go:172] (0xc000a8a0b0) (0xc0002f6000) Stream added, broadcasting: 5\nI0525 11:05:36.999557    1242 log.go:172] (0xc000a8a0b0) Reply frame received for 5\nI0525 11:05:37.057650    1242 log.go:172] (0xc000a8a0b0) Data frame received for 5\nI0525 11:05:37.057696    1242 log.go:172] (0xc0002f6000) (5) Data frame handling\nI0525 11:05:37.057732    1242 log.go:172] (0xc0002f6000) (5) Data frame sent\nI0525 11:05:37.057754    1242 log.go:172] (0xc000a8a0b0) Data frame received for 5\nI0525 11:05:37.057771    1242 log.go:172] (0xc0002f6000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 31844\nConnection to 172.17.0.15 31844 port [tcp/31844] succeeded!\nI0525 11:05:37.057798    1242 log.go:172] (0xc000a8a0b0) Data frame received for 3\nI0525 11:05:37.057817    1242 log.go:172] (0xc000c4a000) (3) Data frame handling\nI0525 11:05:37.059316    1242 log.go:172] (0xc000a8a0b0) Data frame received for 1\nI0525 11:05:37.059345    1242 log.go:172] (0xc0004ecc80) (1) Data frame handling\nI0525 11:05:37.059359    1242 log.go:172] (0xc0004ecc80) (1) Data frame sent\nI0525 11:05:37.059378    1242 log.go:172] (0xc000a8a0b0) (0xc0004ecc80) Stream removed, broadcasting: 1\nI0525 11:05:37.059401    1242 log.go:172] (0xc000a8a0b0) Go away received\nI0525 11:05:37.059842    1242 log.go:172] (0xc000a8a0b0) (0xc0004ecc80) Stream removed, broadcasting: 1\nI0525 11:05:37.059869    1242 log.go:172] (0xc000a8a0b0) (0xc000c4a000) Stream removed, broadcasting: 3\nI0525 11:05:37.059883    1242 log.go:172] (0xc000a8a0b0) (0xc0002f6000) Stream removed, broadcasting: 5\n"
May 25 11:05:37.066: INFO: stdout: ""
May 25 11:05:37.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5932 execpodgl24l -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31844'
May 25 11:05:37.315: INFO: stderr: "I0525 11:05:37.242477    1262 log.go:172] (0xc000974bb0) (0xc0006b7540) Create stream\nI0525 11:05:37.242653    1262 log.go:172] (0xc000974bb0) (0xc0006b7540) Stream added, broadcasting: 1\nI0525 11:05:37.245673    1262 log.go:172] (0xc000974bb0) Reply frame received for 1\nI0525 11:05:37.245748    1262 log.go:172] (0xc000974bb0) (0xc0009c4000) Create stream\nI0525 11:05:37.245770    1262 log.go:172] (0xc000974bb0) (0xc0009c4000) Stream added, broadcasting: 3\nI0525 11:05:37.246806    1262 log.go:172] (0xc000974bb0) Reply frame received for 3\nI0525 11:05:37.246846    1262 log.go:172] (0xc000974bb0) (0xc000614000) Create stream\nI0525 11:05:37.246863    1262 log.go:172] (0xc000974bb0) (0xc000614000) Stream added, broadcasting: 5\nI0525 11:05:37.248137    1262 log.go:172] (0xc000974bb0) Reply frame received for 5\nI0525 11:05:37.308687    1262 log.go:172] (0xc000974bb0) Data frame received for 3\nI0525 11:05:37.308715    1262 log.go:172] (0xc0009c4000) (3) Data frame handling\nI0525 11:05:37.308738    1262 log.go:172] (0xc000974bb0) Data frame received for 5\nI0525 11:05:37.308746    1262 log.go:172] (0xc000614000) (5) Data frame handling\nI0525 11:05:37.308757    1262 log.go:172] (0xc000614000) (5) Data frame sent\nI0525 11:05:37.308763    1262 log.go:172] (0xc000974bb0) Data frame received for 5\nI0525 11:05:37.308767    1262 log.go:172] (0xc000614000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 31844\nConnection to 172.17.0.18 31844 port [tcp/31844] succeeded!\nI0525 11:05:37.310550    1262 log.go:172] (0xc000974bb0) Data frame received for 1\nI0525 11:05:37.310568    1262 log.go:172] (0xc0006b7540) (1) Data frame handling\nI0525 11:05:37.310578    1262 log.go:172] (0xc0006b7540) (1) Data frame sent\nI0525 11:05:37.310586    1262 log.go:172] (0xc000974bb0) (0xc0006b7540) Stream removed, broadcasting: 1\nI0525 11:05:37.310741    1262 log.go:172] (0xc000974bb0) Go away received\nI0525 11:05:37.310828    1262 log.go:172] (0xc000974bb0) (0xc0006b7540) Stream removed, broadcasting: 1\nI0525 11:05:37.310840    1262 log.go:172] (0xc000974bb0) (0xc0009c4000) Stream removed, broadcasting: 3\nI0525 11:05:37.310845    1262 log.go:172] (0xc000974bb0) (0xc000614000) Stream removed, broadcasting: 5\n"
May 25 11:05:37.315: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:05:37.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5932" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:12.307 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":61,"skipped":1118,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:05:37.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:05:37.443: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b815c7e5-76a3-4a03-8521-25fd263d9277" in namespace "projected-8522" to be "Succeeded or Failed"
May 25 11:05:37.449: INFO: Pod "downwardapi-volume-b815c7e5-76a3-4a03-8521-25fd263d9277": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086301ms
May 25 11:05:39.630: INFO: Pod "downwardapi-volume-b815c7e5-76a3-4a03-8521-25fd263d9277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187109011s
May 25 11:05:41.695: INFO: Pod "downwardapi-volume-b815c7e5-76a3-4a03-8521-25fd263d9277": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.252706933s
STEP: Saw pod success
May 25 11:05:41.696: INFO: Pod "downwardapi-volume-b815c7e5-76a3-4a03-8521-25fd263d9277" satisfied condition "Succeeded or Failed"
May 25 11:05:41.698: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b815c7e5-76a3-4a03-8521-25fd263d9277 container client-container: 
STEP: delete the pod
May 25 11:05:42.199: INFO: Waiting for pod downwardapi-volume-b815c7e5-76a3-4a03-8521-25fd263d9277 to disappear
May 25 11:05:42.240: INFO: Pod downwardapi-volume-b815c7e5-76a3-4a03-8521-25fd263d9277 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:05:42.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8522" for this suite.

• [SLOW TEST:5.109 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1120,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:05:42.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-e37fe2c7-a366-42e8-b72e-a5dccd2aa910
STEP: Creating a pod to test consume configMaps
May 25 11:05:42.721: INFO: Waiting up to 5m0s for pod "pod-configmaps-104a18b5-8950-4f76-9b33-8e0141d8740a" in namespace "configmap-5938" to be "Succeeded or Failed"
May 25 11:05:42.761: INFO: Pod "pod-configmaps-104a18b5-8950-4f76-9b33-8e0141d8740a": Phase="Pending", Reason="", readiness=false. Elapsed: 39.499962ms
May 25 11:05:44.777: INFO: Pod "pod-configmaps-104a18b5-8950-4f76-9b33-8e0141d8740a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055692624s
May 25 11:05:46.848: INFO: Pod "pod-configmaps-104a18b5-8950-4f76-9b33-8e0141d8740a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126609574s
STEP: Saw pod success
May 25 11:05:46.848: INFO: Pod "pod-configmaps-104a18b5-8950-4f76-9b33-8e0141d8740a" satisfied condition "Succeeded or Failed"
May 25 11:05:46.850: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-104a18b5-8950-4f76-9b33-8e0141d8740a container configmap-volume-test: 
STEP: delete the pod
May 25 11:05:47.159: INFO: Waiting for pod pod-configmaps-104a18b5-8950-4f76-9b33-8e0141d8740a to disappear
May 25 11:05:47.162: INFO: Pod pod-configmaps-104a18b5-8950-4f76-9b33-8e0141d8740a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:05:47.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5938" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1128,"failed":0}

------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:05:47.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May 25 11:05:47.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:06:04.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8288" for this suite.

• [SLOW TEST:17.552 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":64,"skipped":1128,"failed":0}
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:06:04.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 25 11:06:04.877: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:06:12.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7415" for this suite.

• [SLOW TEST:8.044 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":65,"skipped":1134,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:06:12.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:06:17.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3487" for this suite.

• [SLOW TEST:5.136 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":66,"skipped":1185,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:06:17.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 25 11:06:25.520: INFO: 3 pods remaining
May 25 11:06:25.520: INFO: 0 pods has nil DeletionTimestamp
May 25 11:06:25.520: INFO: 
May 25 11:06:27.218: INFO: 0 pods remaining
May 25 11:06:27.218: INFO: 0 pods has nil DeletionTimestamp
May 25 11:06:27.218: INFO: 
STEP: Gathering metrics
W0525 11:06:28.804394       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 25 11:06:28.804: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:06:28.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9108" for this suite.

• [SLOW TEST:11.236 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":67,"skipped":1188,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:06:29.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-fb4349df-7523-4820-b576-c89b7b07e063
STEP: Creating a pod to test consume secrets
May 25 11:06:29.687: INFO: Waiting up to 5m0s for pod "pod-secrets-911cba7d-589e-427d-901f-3e59bffef074" in namespace "secrets-8753" to be "Succeeded or Failed"
May 25 11:06:29.765: INFO: Pod "pod-secrets-911cba7d-589e-427d-901f-3e59bffef074": Phase="Pending", Reason="", readiness=false. Elapsed: 77.433362ms
May 25 11:06:31.769: INFO: Pod "pod-secrets-911cba7d-589e-427d-901f-3e59bffef074": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082301646s
May 25 11:06:33.774: INFO: Pod "pod-secrets-911cba7d-589e-427d-901f-3e59bffef074": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086594873s
STEP: Saw pod success
May 25 11:06:33.774: INFO: Pod "pod-secrets-911cba7d-589e-427d-901f-3e59bffef074" satisfied condition "Succeeded or Failed"
May 25 11:06:33.777: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-911cba7d-589e-427d-901f-3e59bffef074 container secret-volume-test: 
STEP: delete the pod
May 25 11:06:33.802: INFO: Waiting for pod pod-secrets-911cba7d-589e-427d-901f-3e59bffef074 to disappear
May 25 11:06:33.836: INFO: Pod pod-secrets-911cba7d-589e-427d-901f-3e59bffef074 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:06:33.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8753" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1196,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:06:33.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
May 25 11:06:33.894: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:06:33.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-399" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":69,"skipped":1216,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:06:33.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4623
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May 25 11:06:34.102: INFO: Found 0 stateful pods, waiting for 3
May 25 11:06:44.231: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 25 11:06:44.231: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 25 11:06:44.231: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 25 11:06:54.107: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 25 11:06:54.107: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 25 11:06:54.107: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May 25 11:06:54.137: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 25 11:07:04.251: INFO: Updating stateful set ss2
May 25 11:07:04.307: INFO: Waiting for Pod statefulset-4623/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
May 25 11:07:15.712: INFO: Found 2 stateful pods, waiting for 3
May 25 11:07:25.718: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 25 11:07:25.718: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 25 11:07:25.718: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 25 11:07:25.742: INFO: Updating stateful set ss2
May 25 11:07:25.803: INFO: Waiting for Pod statefulset-4623/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 25 11:07:35.981: INFO: Updating stateful set ss2
May 25 11:07:36.126: INFO: Waiting for StatefulSet statefulset-4623/ss2 to complete update
May 25 11:07:36.126: INFO: Waiting for Pod statefulset-4623/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 25 11:07:46.135: INFO: Deleting all statefulset in ns statefulset-4623
May 25 11:07:46.138: INFO: Scaling statefulset ss2 to 0
May 25 11:08:16.182: INFO: Waiting for statefulset status.replicas updated to 0
May 25 11:08:16.185: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:08:16.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4623" for this suite.

• [SLOW TEST:102.222 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":70,"skipped":1235,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:08:16.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:08:16.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bc08d5f-0757-48c9-9227-8b3df5862031" in namespace "projected-4100" to be "Succeeded or Failed"
May 25 11:08:16.360: INFO: Pod "downwardapi-volume-9bc08d5f-0757-48c9-9227-8b3df5862031": Phase="Pending", Reason="", readiness=false. Elapsed: 20.449405ms
May 25 11:08:18.364: INFO: Pod "downwardapi-volume-9bc08d5f-0757-48c9-9227-8b3df5862031": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024346771s
May 25 11:08:20.401: INFO: Pod "downwardapi-volume-9bc08d5f-0757-48c9-9227-8b3df5862031": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060795946s
May 25 11:08:22.404: INFO: Pod "downwardapi-volume-9bc08d5f-0757-48c9-9227-8b3df5862031": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064238106s
STEP: Saw pod success
May 25 11:08:22.404: INFO: Pod "downwardapi-volume-9bc08d5f-0757-48c9-9227-8b3df5862031" satisfied condition "Succeeded or Failed"
May 25 11:08:22.407: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-9bc08d5f-0757-48c9-9227-8b3df5862031 container client-container: 
STEP: delete the pod
May 25 11:08:22.476: INFO: Waiting for pod downwardapi-volume-9bc08d5f-0757-48c9-9227-8b3df5862031 to disappear
May 25 11:08:22.486: INFO: Pod downwardapi-volume-9bc08d5f-0757-48c9-9227-8b3df5862031 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:08:22.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4100" for this suite.

• [SLOW TEST:6.280 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1246,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:08:22.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
May 25 11:08:22.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4204'
May 25 11:08:28.265: INFO: stderr: ""
May 25 11:08:28.265: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 25 11:08:29.269: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 11:08:29.269: INFO: Found 0 / 1
May 25 11:08:30.271: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 11:08:30.271: INFO: Found 0 / 1
May 25 11:08:31.269: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 11:08:31.269: INFO: Found 0 / 1
May 25 11:08:32.270: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 11:08:32.270: INFO: Found 1 / 1
May 25 11:08:32.270: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
May 25 11:08:32.299: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 11:08:32.299: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 25 11:08:32.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config patch pod agnhost-master-995sj --namespace=kubectl-4204 -p {"metadata":{"annotations":{"x":"y"}}}'
May 25 11:08:32.391: INFO: stderr: ""
May 25 11:08:32.391: INFO: stdout: "pod/agnhost-master-995sj patched\n"
STEP: checking annotations
May 25 11:08:32.448: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 11:08:32.448: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:08:32.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4204" for this suite.

• [SLOW TEST:9.964 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":72,"skipped":1250,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:08:32.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-6aefd222-d42a-437c-8433-e1a2eacfa99a
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:08:32.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-430" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":73,"skipped":1280,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:08:32.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-c879c84c-59e7-464f-9eee-9784a1bd9a5d
STEP: Creating a pod to test consume configMaps
May 25 11:08:32.650: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ada2879e-10f2-4f46-b316-09fa9d4925b7" in namespace "projected-3285" to be "Succeeded or Failed"
May 25 11:08:32.667: INFO: Pod "pod-projected-configmaps-ada2879e-10f2-4f46-b316-09fa9d4925b7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.241051ms
May 25 11:08:34.671: INFO: Pod "pod-projected-configmaps-ada2879e-10f2-4f46-b316-09fa9d4925b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020438915s
May 25 11:08:36.675: INFO: Pod "pod-projected-configmaps-ada2879e-10f2-4f46-b316-09fa9d4925b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024756679s
STEP: Saw pod success
May 25 11:08:36.675: INFO: Pod "pod-projected-configmaps-ada2879e-10f2-4f46-b316-09fa9d4925b7" satisfied condition "Succeeded or Failed"
May 25 11:08:36.678: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-ada2879e-10f2-4f46-b316-09fa9d4925b7 container projected-configmap-volume-test: 
STEP: delete the pod
May 25 11:08:36.716: INFO: Waiting for pod pod-projected-configmaps-ada2879e-10f2-4f46-b316-09fa9d4925b7 to disappear
May 25 11:08:36.726: INFO: Pod pod-projected-configmaps-ada2879e-10f2-4f46-b316-09fa9d4925b7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:08:36.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3285" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1305,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:08:36.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-221c9b8f-25b7-436e-b9b6-5584fd3c87cb
STEP: Creating a pod to test consume secrets
May 25 11:08:36.826: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-abfff38e-d668-4318-9424-85f38205c9d4" in namespace "projected-2397" to be "Succeeded or Failed"
May 25 11:08:37.042: INFO: Pod "pod-projected-secrets-abfff38e-d668-4318-9424-85f38205c9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 215.87289ms
May 25 11:08:39.074: INFO: Pod "pod-projected-secrets-abfff38e-d668-4318-9424-85f38205c9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248350552s
May 25 11:08:41.079: INFO: Pod "pod-projected-secrets-abfff38e-d668-4318-9424-85f38205c9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253167467s
May 25 11:08:43.084: INFO: Pod "pod-projected-secrets-abfff38e-d668-4318-9424-85f38205c9d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.258001s
STEP: Saw pod success
May 25 11:08:43.084: INFO: Pod "pod-projected-secrets-abfff38e-d668-4318-9424-85f38205c9d4" satisfied condition "Succeeded or Failed"
May 25 11:08:43.087: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-abfff38e-d668-4318-9424-85f38205c9d4 container projected-secret-volume-test: 
STEP: delete the pod
May 25 11:08:43.156: INFO: Waiting for pod pod-projected-secrets-abfff38e-d668-4318-9424-85f38205c9d4 to disappear
May 25 11:08:43.163: INFO: Pod pod-projected-secrets-abfff38e-d668-4318-9424-85f38205c9d4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:08:43.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2397" for this suite.

• [SLOW TEST:6.439 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1313,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:08:43.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:08:43.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
May 25 11:08:43.844: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T11:08:43Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-25T11:08:43Z]] name:name1 resourceVersion:7168418 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fab714f5-f580-4bd1-84dd-99b2f598fd36] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May 25 11:08:53.850: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T11:08:53Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-25T11:08:53Z]] name:name2 resourceVersion:7168459 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:57a239ad-fa16-4403-af6b-a10d7f6f18ad] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May 25 11:09:03.859: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T11:08:43Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-25T11:09:03Z]] name:name1 resourceVersion:7168487 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fab714f5-f580-4bd1-84dd-99b2f598fd36] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May 25 11:09:13.866: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T11:08:53Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-25T11:09:13Z]] name:name2 resourceVersion:7168517 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:57a239ad-fa16-4403-af6b-a10d7f6f18ad] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May 25 11:09:23.875: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T11:08:43Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-25T11:09:03Z]] name:name1 resourceVersion:7168547 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fab714f5-f580-4bd1-84dd-99b2f598fd36] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May 25 11:09:33.883: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T11:08:53Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-25T11:09:13Z]] name:name2 resourceVersion:7168577 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:57a239ad-fa16-4403-af6b-a10d7f6f18ad] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:09:44.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8238" for this suite.

• [SLOW TEST:61.232 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":76,"skipped":1332,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:09:44.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-a4a2db24-874d-4e48-a933-49f71dd01dbc in namespace container-probe-7445
May 25 11:09:48.541: INFO: Started pod liveness-a4a2db24-874d-4e48-a933-49f71dd01dbc in namespace container-probe-7445
STEP: checking the pod's current state and verifying that restartCount is present
May 25 11:09:48.545: INFO: Initial restart count of pod liveness-a4a2db24-874d-4e48-a933-49f71dd01dbc is 0
May 25 11:10:02.598: INFO: Restart count of pod container-probe-7445/liveness-a4a2db24-874d-4e48-a933-49f71dd01dbc is now 1 (14.05360596s elapsed)
May 25 11:10:22.644: INFO: Restart count of pod container-probe-7445/liveness-a4a2db24-874d-4e48-a933-49f71dd01dbc is now 2 (34.099455721s elapsed)
May 25 11:10:42.716: INFO: Restart count of pod container-probe-7445/liveness-a4a2db24-874d-4e48-a933-49f71dd01dbc is now 3 (54.171680423s elapsed)
May 25 11:11:02.770: INFO: Restart count of pod container-probe-7445/liveness-a4a2db24-874d-4e48-a933-49f71dd01dbc is now 4 (1m14.225514619s elapsed)
May 25 11:12:04.914: INFO: Restart count of pod container-probe-7445/liveness-a4a2db24-874d-4e48-a933-49f71dd01dbc is now 5 (2m16.369631837s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:12:04.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7445" for this suite.

• [SLOW TEST:140.560 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1336,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:12:04.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:12:05.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1eda9f71-e2d1-4ba7-8a88-b4c8d5433475" in namespace "downward-api-6924" to be "Succeeded or Failed"
May 25 11:12:05.115: INFO: Pod "downwardapi-volume-1eda9f71-e2d1-4ba7-8a88-b4c8d5433475": Phase="Pending", Reason="", readiness=false. Elapsed: 112.17746ms
May 25 11:12:07.170: INFO: Pod "downwardapi-volume-1eda9f71-e2d1-4ba7-8a88-b4c8d5433475": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166795871s
May 25 11:12:09.174: INFO: Pod "downwardapi-volume-1eda9f71-e2d1-4ba7-8a88-b4c8d5433475": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.170364094s
STEP: Saw pod success
May 25 11:12:09.174: INFO: Pod "downwardapi-volume-1eda9f71-e2d1-4ba7-8a88-b4c8d5433475" satisfied condition "Succeeded or Failed"
May 25 11:12:09.176: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-1eda9f71-e2d1-4ba7-8a88-b4c8d5433475 container client-container: 
STEP: delete the pod
May 25 11:12:09.417: INFO: Waiting for pod downwardapi-volume-1eda9f71-e2d1-4ba7-8a88-b4c8d5433475 to disappear
May 25 11:12:09.467: INFO: Pod downwardapi-volume-1eda9f71-e2d1-4ba7-8a88-b4c8d5433475 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:12:09.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6924" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1349,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:12:09.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-75eea9d5-cf01-40ae-8817-9fc854c816fc
STEP: Creating secret with name s-test-opt-upd-fc10379f-0061-4e7a-8235-061f3643bb1d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-75eea9d5-cf01-40ae-8817-9fc854c816fc
STEP: Updating secret s-test-opt-upd-fc10379f-0061-4e7a-8235-061f3643bb1d
STEP: Creating secret with name s-test-opt-create-ae304807-82d7-454b-8592-0f3abb4ce57c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:12:17.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6122" for this suite.

• [SLOW TEST:8.288 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1357,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:12:17.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
May 25 11:12:17.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config api-versions'
May 25 11:12:18.037: INFO: stderr: ""
May 25 11:12:18.037: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:12:18.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3013" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":80,"skipped":1366,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:12:18.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
May 25 11:12:18.189: INFO: Waiting up to 5m0s for pod "pod-ac19d737-cb7c-4ea4-9f24-8245b83affd8" in namespace "emptydir-7713" to be "Succeeded or Failed"
May 25 11:12:18.207: INFO: Pod "pod-ac19d737-cb7c-4ea4-9f24-8245b83affd8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.996338ms
May 25 11:12:20.356: INFO: Pod "pod-ac19d737-cb7c-4ea4-9f24-8245b83affd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167159776s
May 25 11:12:22.361: INFO: Pod "pod-ac19d737-cb7c-4ea4-9f24-8245b83affd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171531121s
STEP: Saw pod success
May 25 11:12:22.361: INFO: Pod "pod-ac19d737-cb7c-4ea4-9f24-8245b83affd8" satisfied condition "Succeeded or Failed"
May 25 11:12:22.364: INFO: Trying to get logs from node kali-worker pod pod-ac19d737-cb7c-4ea4-9f24-8245b83affd8 container test-container: 
STEP: delete the pod
May 25 11:12:22.462: INFO: Waiting for pod pod-ac19d737-cb7c-4ea4-9f24-8245b83affd8 to disappear
May 25 11:12:22.475: INFO: Pod pod-ac19d737-cb7c-4ea4-9f24-8245b83affd8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:12:22.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7713" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1376,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:12:22.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-16
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-16
I0525 11:12:22.766196       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-16, replica count: 2
I0525 11:12:25.816827       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:12:28.817073       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 25 11:12:28.817: INFO: Creating new exec pod
May 25 11:12:33.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-16 execpodzthpm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 25 11:12:34.103: INFO: stderr: "I0525 11:12:33.994347    1375 log.go:172] (0xc000a2a790) (0xc0006d1540) Create stream\nI0525 11:12:33.994425    1375 log.go:172] (0xc000a2a790) (0xc0006d1540) Stream added, broadcasting: 1\nI0525 11:12:33.997953    1375 log.go:172] (0xc000a2a790) Reply frame received for 1\nI0525 11:12:33.997999    1375 log.go:172] (0xc000a2a790) (0xc0009ca000) Create stream\nI0525 11:12:33.998010    1375 log.go:172] (0xc000a2a790) (0xc0009ca000) Stream added, broadcasting: 3\nI0525 11:12:33.999144    1375 log.go:172] (0xc000a2a790) Reply frame received for 3\nI0525 11:12:33.999182    1375 log.go:172] (0xc000a2a790) (0xc0006d15e0) Create stream\nI0525 11:12:33.999195    1375 log.go:172] (0xc000a2a790) (0xc0006d15e0) Stream added, broadcasting: 5\nI0525 11:12:34.000112    1375 log.go:172] (0xc000a2a790) Reply frame received for 5\nI0525 11:12:34.089034    1375 log.go:172] (0xc000a2a790) Data frame received for 5\nI0525 11:12:34.089068    1375 log.go:172] (0xc0006d15e0) (5) Data frame handling\nI0525 11:12:34.089094    1375 log.go:172] (0xc0006d15e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0525 11:12:34.089900    1375 log.go:172] (0xc000a2a790) Data frame received for 5\nI0525 11:12:34.089929    1375 log.go:172] (0xc0006d15e0) (5) Data frame handling\nI0525 11:12:34.089958    1375 log.go:172] (0xc0006d15e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0525 11:12:34.090309    1375 log.go:172] (0xc000a2a790) Data frame received for 5\nI0525 11:12:34.090339    1375 log.go:172] (0xc0006d15e0) (5) Data frame handling\nI0525 11:12:34.090372    1375 log.go:172] (0xc000a2a790) Data frame received for 3\nI0525 11:12:34.090400    1375 log.go:172] (0xc0009ca000) (3) Data frame handling\nI0525 11:12:34.097794    1375 log.go:172] (0xc000a2a790) Data frame received for 1\nI0525 11:12:34.097822    1375 log.go:172] (0xc0006d1540) (1) Data frame handling\nI0525 11:12:34.097832    1375 log.go:172] (0xc0006d1540) (1) Data frame sent\nI0525 11:12:34.097844    1375 log.go:172] (0xc000a2a790) (0xc0006d1540) Stream removed, broadcasting: 1\nI0525 11:12:34.098156    1375 log.go:172] (0xc000a2a790) (0xc0006d1540) Stream removed, broadcasting: 1\nI0525 11:12:34.098173    1375 log.go:172] (0xc000a2a790) (0xc0009ca000) Stream removed, broadcasting: 3\nI0525 11:12:34.098181    1375 log.go:172] (0xc000a2a790) (0xc0006d15e0) Stream removed, broadcasting: 5\n"
May 25 11:12:34.103: INFO: stdout: ""
May 25 11:12:34.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-16 execpodzthpm -- /bin/sh -x -c nc -zv -t -w 2 10.108.78.27 80'
May 25 11:12:34.307: INFO: stderr: "I0525 11:12:34.235852    1396 log.go:172] (0xc00003be40) (0xc0006774a0) Create stream\nI0525 11:12:34.235911    1396 log.go:172] (0xc00003be40) (0xc0006774a0) Stream added, broadcasting: 1\nI0525 11:12:34.239109    1396 log.go:172] (0xc00003be40) Reply frame received for 1\nI0525 11:12:34.239172    1396 log.go:172] (0xc00003be40) (0xc00024e000) Create stream\nI0525 11:12:34.239187    1396 log.go:172] (0xc00003be40) (0xc00024e000) Stream added, broadcasting: 3\nI0525 11:12:34.240466    1396 log.go:172] (0xc00003be40) Reply frame received for 3\nI0525 11:12:34.240537    1396 log.go:172] (0xc00003be40) (0xc00030a000) Create stream\nI0525 11:12:34.240571    1396 log.go:172] (0xc00003be40) (0xc00030a000) Stream added, broadcasting: 5\nI0525 11:12:34.242019    1396 log.go:172] (0xc00003be40) Reply frame received for 5\nI0525 11:12:34.300803    1396 log.go:172] (0xc00003be40) Data frame received for 5\nI0525 11:12:34.300836    1396 log.go:172] (0xc00030a000) (5) Data frame handling\nI0525 11:12:34.300876    1396 log.go:172] (0xc00030a000) (5) Data frame sent\n+ nc -zv -t -w 2 10.108.78.27 80\nConnection to 10.108.78.27 80 port [tcp/http] succeeded!\nI0525 11:12:34.301056    1396 log.go:172] (0xc00003be40) Data frame received for 3\nI0525 11:12:34.301080    1396 log.go:172] (0xc00024e000) (3) Data frame handling\nI0525 11:12:34.301325    1396 log.go:172] (0xc00003be40) Data frame received for 5\nI0525 11:12:34.301355    1396 log.go:172] (0xc00030a000) (5) Data frame handling\nI0525 11:12:34.302510    1396 log.go:172] (0xc00003be40) Data frame received for 1\nI0525 11:12:34.302522    1396 log.go:172] (0xc0006774a0) (1) Data frame handling\nI0525 11:12:34.302529    1396 log.go:172] (0xc0006774a0) (1) Data frame sent\nI0525 11:12:34.302644    1396 log.go:172] (0xc00003be40) (0xc0006774a0) Stream removed, broadcasting: 1\nI0525 11:12:34.302935    1396 log.go:172] (0xc00003be40) Go away received\nI0525 11:12:34.303099    1396 log.go:172] (0xc00003be40) (0xc0006774a0) Stream removed, broadcasting: 1\nI0525 11:12:34.303117    1396 log.go:172] (0xc00003be40) (0xc00024e000) Stream removed, broadcasting: 3\nI0525 11:12:34.303127    1396 log.go:172] (0xc00003be40) (0xc00030a000) Stream removed, broadcasting: 5\n"
May 25 11:12:34.307: INFO: stdout: ""
May 25 11:12:34.307: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:12:34.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-16" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:11.840 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":82,"skipped":1395,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:12:34.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7350.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7350.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 25 11:12:40.905: INFO: DNS probes using dns-7350/dns-test-880ed258-5f0d-47cd-8a27-4a0b6406dc83 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:12:40.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7350" for this suite.

• [SLOW TEST:6.894 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":83,"skipped":1415,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:12:41.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-a3d10a4a-db3f-41cf-ba0c-381f3b9e5a0f in namespace container-probe-2012
May 25 11:12:47.532: INFO: Started pod liveness-a3d10a4a-db3f-41cf-ba0c-381f3b9e5a0f in namespace container-probe-2012
STEP: checking the pod's current state and verifying that restartCount is present
May 25 11:12:47.535: INFO: Initial restart count of pod liveness-a3d10a4a-db3f-41cf-ba0c-381f3b9e5a0f is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:16:48.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2012" for this suite.

• [SLOW TEST:247.230 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1433,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:16:48.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5496
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5496
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-5496
May 25 11:16:48.999: INFO: Found 0 stateful pods, waiting for 1
May 25 11:16:59.004: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
May 25 11:16:59.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5496 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 25 11:16:59.251: INFO: stderr: "I0525 11:16:59.133476    1420 log.go:172] (0xc0003c89a0) (0xc0009003c0) Create stream\nI0525 11:16:59.133534    1420 log.go:172] (0xc0003c89a0) (0xc0009003c0) Stream added, broadcasting: 1\nI0525 11:16:59.136539    1420 log.go:172] (0xc0003c89a0) Reply frame received for 1\nI0525 11:16:59.136584    1420 log.go:172] (0xc0003c89a0) (0xc0003faaa0) Create stream\nI0525 11:16:59.136601    1420 log.go:172] (0xc0003c89a0) (0xc0003faaa0) Stream added, broadcasting: 3\nI0525 11:16:59.137783    1420 log.go:172] (0xc0003c89a0) Reply frame received for 3\nI0525 11:16:59.137807    1420 log.go:172] (0xc0003c89a0) (0xc000900460) Create stream\nI0525 11:16:59.137815    1420 log.go:172] (0xc0003c89a0) (0xc000900460) Stream added, broadcasting: 5\nI0525 11:16:59.138710    1420 log.go:172] (0xc0003c89a0) Reply frame received for 5\nI0525 11:16:59.194017    1420 log.go:172] (0xc0003c89a0) Data frame received for 5\nI0525 11:16:59.194050    1420 log.go:172] (0xc000900460) (5) Data frame handling\nI0525 11:16:59.194070    1420 log.go:172] (0xc000900460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 11:16:59.243195    1420 log.go:172] (0xc0003c89a0) Data frame received for 3\nI0525 11:16:59.243223    1420 log.go:172] (0xc0003faaa0) (3) Data frame handling\nI0525 11:16:59.243244    1420 log.go:172] (0xc0003faaa0) (3) Data frame sent\nI0525 11:16:59.243282    1420 log.go:172] (0xc0003c89a0) Data frame received for 5\nI0525 11:16:59.243309    1420 log.go:172] (0xc0003c89a0) Data frame received for 3\nI0525 11:16:59.243326    1420 log.go:172] (0xc0003faaa0) (3) Data frame handling\nI0525 11:16:59.243343    1420 log.go:172] (0xc000900460) (5) Data frame handling\nI0525 11:16:59.245304    1420 log.go:172] (0xc0003c89a0) Data frame received for 1\nI0525 11:16:59.245350    1420 log.go:172] (0xc0009003c0) (1) Data frame handling\nI0525 11:16:59.245383    1420 log.go:172] (0xc0009003c0) (1) Data frame sent\nI0525 11:16:59.245414    1420 log.go:172] (0xc0003c89a0) (0xc0009003c0) Stream removed, broadcasting: 1\nI0525 11:16:59.245433    1420 log.go:172] (0xc0003c89a0) Go away received\nI0525 11:16:59.245790    1420 log.go:172] (0xc0003c89a0) (0xc0009003c0) Stream removed, broadcasting: 1\nI0525 11:16:59.245805    1420 log.go:172] (0xc0003c89a0) (0xc0003faaa0) Stream removed, broadcasting: 3\nI0525 11:16:59.245813    1420 log.go:172] (0xc0003c89a0) (0xc000900460) Stream removed, broadcasting: 5\n"
May 25 11:16:59.251: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 25 11:16:59.251: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 25 11:16:59.254: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 25 11:17:09.258: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 25 11:17:09.258: INFO: Waiting for statefulset status.replicas updated to 0
May 25 11:17:09.280: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999546s
May 25 11:17:10.285: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.987117025s
May 25 11:17:11.543: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.981807365s
May 25 11:17:12.548: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.723647578s
May 25 11:17:13.552: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.718777298s
May 25 11:17:14.557: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.714566175s
May 25 11:17:15.562: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.709325955s
May 25 11:17:16.567: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.704611988s
May 25 11:17:17.572: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.699644366s
May 25 11:17:18.576: INFO: Verifying statefulset ss doesn't scale past 1 for another 695.154005ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5496
May 25 11:17:19.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5496 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 11:17:19.869: INFO: stderr: "I0525 11:17:19.781991    1443 log.go:172] (0xc00003a580) (0xc000a6c0a0) Create stream\nI0525 11:17:19.782073    1443 log.go:172] (0xc00003a580) (0xc000a6c0a0) Stream added, broadcasting: 1\nI0525 11:17:19.785563    1443 log.go:172] (0xc00003a580) Reply frame received for 1\nI0525 11:17:19.785621    1443 log.go:172] (0xc00003a580) (0xc000a5a140) Create stream\nI0525 11:17:19.785640    1443 log.go:172] (0xc00003a580) (0xc000a5a140) Stream added, broadcasting: 3\nI0525 11:17:19.787617    1443 log.go:172] (0xc00003a580) Reply frame received for 3\nI0525 11:17:19.787658    1443 log.go:172] (0xc00003a580) (0xc0004bb680) Create stream\nI0525 11:17:19.787675    1443 log.go:172] (0xc00003a580) (0xc0004bb680) Stream added, broadcasting: 5\nI0525 11:17:19.788444    1443 log.go:172] (0xc00003a580) Reply frame received for 5\nI0525 11:17:19.861390    1443 log.go:172] (0xc00003a580) Data frame received for 3\nI0525 11:17:19.861510    1443 log.go:172] (0xc000a5a140) (3) Data frame handling\nI0525 11:17:19.861529    1443 log.go:172] (0xc000a5a140) (3) Data frame sent\nI0525 11:17:19.861558    1443 log.go:172] (0xc00003a580) Data frame received for 5\nI0525 11:17:19.861593    1443 log.go:172] (0xc0004bb680) (5) Data frame handling\nI0525 11:17:19.861613    1443 log.go:172] (0xc0004bb680) (5) Data frame sent\nI0525 11:17:19.861635    1443 log.go:172] (0xc00003a580) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 11:17:19.861653    1443 log.go:172] (0xc00003a580) Data frame received for 3\nI0525 11:17:19.861683    1443 log.go:172] (0xc000a5a140) (3) Data frame handling\nI0525 11:17:19.861702    1443 log.go:172] (0xc0004bb680) (5) Data frame handling\nI0525 11:17:19.863022    1443 log.go:172] (0xc00003a580) Data frame received for 1\nI0525 11:17:19.863054    1443 log.go:172] (0xc000a6c0a0) (1) Data frame handling\nI0525 11:17:19.863077    1443 log.go:172] (0xc000a6c0a0) (1) Data frame sent\nI0525 11:17:19.863096    1443 log.go:172] (0xc00003a580) (0xc000a6c0a0) Stream removed, broadcasting: 1\nI0525 11:17:19.863114    1443 log.go:172] (0xc00003a580) Go away received\nI0525 11:17:19.863439    1443 log.go:172] (0xc00003a580) (0xc000a6c0a0) Stream removed, broadcasting: 1\nI0525 11:17:19.863459    1443 log.go:172] (0xc00003a580) (0xc000a5a140) Stream removed, broadcasting: 3\nI0525 11:17:19.863475    1443 log.go:172] (0xc00003a580) (0xc0004bb680) Stream removed, broadcasting: 5\n"
May 25 11:17:19.869: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 25 11:17:19.869: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 25 11:17:19.914: INFO: Found 1 stateful pods, waiting for 3
May 25 11:17:29.918: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 25 11:17:29.918: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 25 11:17:29.918: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May 25 11:17:29.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5496 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 25 11:17:30.143: INFO: stderr: "I0525 11:17:30.042878    1466 log.go:172] (0xc000b1cdc0) (0xc000b103c0) Create stream\nI0525 11:17:30.042926    1466 log.go:172] (0xc000b1cdc0) (0xc000b103c0) Stream added, broadcasting: 1\nI0525 11:17:30.044900    1466 log.go:172] (0xc000b1cdc0) Reply frame received for 1\nI0525 11:17:30.044930    1466 log.go:172] (0xc000b1cdc0) (0xc000b10460) Create stream\nI0525 11:17:30.044939    1466 log.go:172] (0xc000b1cdc0) (0xc000b10460) Stream added, broadcasting: 3\nI0525 11:17:30.045834    1466 log.go:172] (0xc000b1cdc0) Reply frame received for 3\nI0525 11:17:30.045880    1466 log.go:172] (0xc000b1cdc0) (0xc000b10500) Create stream\nI0525 11:17:30.045899    1466 log.go:172] (0xc000b1cdc0) (0xc000b10500) Stream added, broadcasting: 5\nI0525 11:17:30.046538    1466 log.go:172] (0xc000b1cdc0) Reply frame received for 5\nI0525 11:17:30.134654    1466 log.go:172] (0xc000b1cdc0) Data frame received for 5\nI0525 11:17:30.134710    1466 log.go:172] (0xc000b10500) (5) Data frame handling\nI0525 11:17:30.134725    1466 log.go:172] (0xc000b10500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 11:17:30.134765    1466 log.go:172] (0xc000b1cdc0) Data frame received for 3\nI0525 11:17:30.134808    1466 log.go:172] (0xc000b10460) (3) Data frame handling\nI0525 11:17:30.134832    1466 log.go:172] (0xc000b10460) (3) Data frame sent\nI0525 11:17:30.134856    1466 log.go:172] (0xc000b1cdc0) Data frame received for 3\nI0525 11:17:30.134892    1466 log.go:172] (0xc000b1cdc0) Data frame received for 5\nI0525 11:17:30.134915    1466 log.go:172] (0xc000b10500) (5) Data frame handling\nI0525 11:17:30.134939    1466 log.go:172] (0xc000b10460) (3) Data frame handling\nI0525 11:17:30.136195    1466 log.go:172] (0xc000b1cdc0) Data frame received for 1\nI0525 11:17:30.136240    1466 log.go:172] (0xc000b103c0) (1) Data frame handling\nI0525 11:17:30.136289    1466 log.go:172] (0xc000b103c0) (1) Data frame sent\nI0525 11:17:30.136347    1466 log.go:172] (0xc000b1cdc0) (0xc000b103c0) Stream removed, broadcasting: 1\nI0525 11:17:30.136460    1466 log.go:172] (0xc000b1cdc0) Go away received\nI0525 11:17:30.137880    1466 log.go:172] (0xc000b1cdc0) (0xc000b103c0) Stream removed, broadcasting: 1\nI0525 11:17:30.137982    1466 log.go:172] (0xc000b1cdc0) (0xc000b10460) Stream removed, broadcasting: 3\nI0525 11:17:30.138061    1466 log.go:172] (0xc000b1cdc0) (0xc000b10500) Stream removed, broadcasting: 5\n"
May 25 11:17:30.143: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 25 11:17:30.143: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 25 11:17:30.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5496 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 25 11:17:30.399: INFO: stderr: "I0525 11:17:30.304191    1487 log.go:172] (0xc0000e8370) (0xc000a3a000) Create stream\nI0525 11:17:30.304269    1487 log.go:172] (0xc0000e8370) (0xc000a3a000) Stream added, broadcasting: 1\nI0525 11:17:30.306317    1487 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0525 11:17:30.306367    1487 log.go:172] (0xc0000e8370) (0xc000340000) Create stream\nI0525 11:17:30.306379    1487 log.go:172] (0xc0000e8370) (0xc000340000) Stream added, broadcasting: 3\nI0525 11:17:30.307332    1487 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0525 11:17:30.307369    1487 log.go:172] (0xc0000e8370) (0xc000a3a0a0) Create stream\nI0525 11:17:30.307378    1487 log.go:172] (0xc0000e8370) (0xc000a3a0a0) Stream added, broadcasting: 5\nI0525 11:17:30.308455    1487 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0525 11:17:30.373014    1487 log.go:172] (0xc0000e8370) Data frame received for 5\nI0525 11:17:30.373053    1487 log.go:172] (0xc000a3a0a0) (5) Data frame handling\nI0525 11:17:30.373071    1487 log.go:172] (0xc000a3a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 11:17:30.390424    1487 log.go:172] (0xc0000e8370) Data frame received for 5\nI0525 11:17:30.390460    1487 log.go:172] (0xc000a3a0a0) (5) Data frame handling\nI0525 11:17:30.390488    1487 log.go:172] (0xc0000e8370) Data frame received for 3\nI0525 11:17:30.390531    1487 log.go:172] (0xc000340000) (3) Data frame handling\nI0525 11:17:30.390577    1487 log.go:172] (0xc000340000) (3) Data frame sent\nI0525 11:17:30.390608    1487 log.go:172] (0xc0000e8370) Data frame received for 3\nI0525 11:17:30.390627    1487 log.go:172] (0xc000340000) (3) Data frame handling\nI0525 11:17:30.392917    1487 log.go:172] (0xc0000e8370) Data frame received for 1\nI0525 11:17:30.392932    1487 log.go:172] (0xc000a3a000) (1) Data frame handling\nI0525 11:17:30.392943    1487 log.go:172] (0xc000a3a000) (1) Data frame sent\nI0525 11:17:30.393074    1487 log.go:172] (0xc0000e8370) (0xc000a3a000) Stream removed, broadcasting: 1\nI0525 11:17:30.393291    1487 log.go:172] (0xc0000e8370) Go away received\nI0525 11:17:30.393542    1487 log.go:172] (0xc0000e8370) (0xc000a3a000) Stream removed, broadcasting: 1\nI0525 11:17:30.393557    1487 log.go:172] (0xc0000e8370) (0xc000340000) Stream removed, broadcasting: 3\nI0525 11:17:30.393563    1487 log.go:172] (0xc0000e8370) (0xc000a3a0a0) Stream removed, broadcasting: 5\n"
May 25 11:17:30.399: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 25 11:17:30.399: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 25 11:17:30.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5496 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 25 11:17:30.639: INFO: stderr: "I0525 11:17:30.520221    1510 log.go:172] (0xc000af2bb0) (0xc000adc140) Create stream\nI0525 11:17:30.520261    1510 log.go:172] (0xc000af2bb0) (0xc000adc140) Stream added, broadcasting: 1\nI0525 11:17:30.522419    1510 log.go:172] (0xc000af2bb0) Reply frame received for 1\nI0525 11:17:30.522470    1510 log.go:172] (0xc000af2bb0) (0xc0006cb2c0) Create stream\nI0525 11:17:30.522497    1510 log.go:172] (0xc000af2bb0) (0xc0006cb2c0) Stream added, broadcasting: 3\nI0525 11:17:30.523265    1510 log.go:172] (0xc000af2bb0) Reply frame received for 3\nI0525 11:17:30.523296    1510 log.go:172] (0xc000af2bb0) (0xc0001d6000) Create stream\nI0525 11:17:30.523304    1510 log.go:172] (0xc000af2bb0) (0xc0001d6000) Stream added, broadcasting: 5\nI0525 11:17:30.523991    1510 log.go:172] (0xc000af2bb0) Reply frame received for 5\nI0525 11:17:30.581095    1510 log.go:172] (0xc000af2bb0) Data frame received for 5\nI0525 11:17:30.581324    1510 log.go:172] (0xc0001d6000) (5) Data frame handling\nI0525 11:17:30.581350    1510 log.go:172] (0xc0001d6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 11:17:30.630560    1510 log.go:172] (0xc000af2bb0) Data frame received for 3\nI0525 11:17:30.630698    1510 log.go:172] (0xc0006cb2c0) (3) Data frame handling\nI0525 11:17:30.630714    1510 log.go:172] (0xc0006cb2c0) (3) Data frame sent\nI0525 11:17:30.630723    1510 log.go:172] (0xc000af2bb0) Data frame received for 3\nI0525 11:17:30.630730    1510 log.go:172] (0xc0006cb2c0) (3) Data frame handling\nI0525 11:17:30.630773    1510 log.go:172] (0xc000af2bb0) Data frame received for 5\nI0525 11:17:30.630816    1510 log.go:172] (0xc0001d6000) (5) Data frame handling\nI0525 11:17:30.633029    1510 log.go:172] (0xc000af2bb0) Data frame received for 1\nI0525 11:17:30.633043    1510 log.go:172] (0xc000adc140) (1) Data frame handling\nI0525 11:17:30.633050    1510 log.go:172] (0xc000adc140) (1) Data frame sent\nI0525 11:17:30.633058    1510 log.go:172] (0xc000af2bb0) (0xc000adc140) Stream removed, broadcasting: 1\nI0525 11:17:30.633066    1510 log.go:172] (0xc000af2bb0) Go away received\nI0525 11:17:30.633538    1510 log.go:172] (0xc000af2bb0) (0xc000adc140) Stream removed, broadcasting: 1\nI0525 11:17:30.633564    1510 log.go:172] (0xc000af2bb0) (0xc0006cb2c0) Stream removed, broadcasting: 3\nI0525 11:17:30.633578    1510 log.go:172] (0xc000af2bb0) (0xc0001d6000) Stream removed, broadcasting: 5\n"
May 25 11:17:30.639: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 25 11:17:30.639: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 25 11:17:30.639: INFO: Waiting for statefulset status.replicas updated to 0
May 25 11:17:30.642: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
May 25 11:17:40.650: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 25 11:17:40.650: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 25 11:17:40.650: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 25 11:17:40.667: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999471s
May 25 11:17:41.673: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991654719s
May 25 11:17:42.679: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985548786s
May 25 11:17:43.685: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.97983267s
May 25 11:17:44.689: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.973980025s
May 25 11:17:45.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.969423207s
May 25 11:17:46.699: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965265721s
May 25 11:17:47.704: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.959781627s
May 25 11:17:48.709: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954837482s
May 25 11:17:49.713: INFO: Verifying statefulset ss doesn't scale past 3 for another 949.366137ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5496
May 25 11:17:50.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5496 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 11:17:50.946: INFO: stderr: "I0525 11:17:50.848493    1531 log.go:172] (0xc000b849a0) (0xc000831540) Create stream\nI0525 11:17:50.848547    1531 log.go:172] (0xc000b849a0) (0xc000831540) Stream added, broadcasting: 1\nI0525 11:17:50.851008    1531 log.go:172] (0xc000b849a0) Reply frame received for 1\nI0525 11:17:50.851056    1531 log.go:172] (0xc000b849a0) (0xc0009ac000) Create stream\nI0525 11:17:50.851073    1531 log.go:172] (0xc000b849a0) (0xc0009ac000) Stream added, broadcasting: 3\nI0525 11:17:50.852087    1531 log.go:172] (0xc000b849a0) Reply frame received for 3\nI0525 11:17:50.852127    1531 log.go:172] (0xc000b849a0) (0xc00021a000) Create stream\nI0525 11:17:50.852141    1531 log.go:172] (0xc000b849a0) (0xc00021a000) Stream added, broadcasting: 5\nI0525 11:17:50.853023    1531 log.go:172] (0xc000b849a0) Reply frame received for 5\nI0525 11:17:50.939778    1531 log.go:172] (0xc000b849a0) Data frame received for 5\nI0525 11:17:50.939801    1531 log.go:172] (0xc00021a000) (5) Data frame handling\nI0525 11:17:50.939810    1531 log.go:172] (0xc00021a000) (5) Data frame sent\nI0525 11:17:50.939817    1531 log.go:172] (0xc000b849a0) Data frame received for 5\nI0525 11:17:50.939823    1531 log.go:172] (0xc00021a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 11:17:50.939852    1531 log.go:172] (0xc000b849a0) Data frame received for 3\nI0525 11:17:50.939876    1531 log.go:172] (0xc0009ac000) (3) Data frame handling\nI0525 11:17:50.939887    1531 log.go:172] (0xc0009ac000) (3) Data frame sent\nI0525 11:17:50.939896    1531 log.go:172] (0xc000b849a0) Data frame received for 3\nI0525 11:17:50.939905    1531 log.go:172] (0xc0009ac000) (3) Data frame handling\nI0525 11:17:50.941084    1531 log.go:172] (0xc000b849a0) Data frame received for 1\nI0525 11:17:50.941240    1531 log.go:172] (0xc000831540) (1) Data frame handling\nI0525 11:17:50.941283    1531 log.go:172] (0xc000831540) (1) Data frame sent\nI0525 11:17:50.941303    1531 log.go:172] (0xc000b849a0) (0xc000831540) Stream removed, broadcasting: 1\nI0525 11:17:50.941453    1531 log.go:172] (0xc000b849a0) Go away received\nI0525 11:17:50.941621    1531 log.go:172] (0xc000b849a0) (0xc000831540) Stream removed, broadcasting: 1\nI0525 11:17:50.941639    1531 log.go:172] (0xc000b849a0) (0xc0009ac000) Stream removed, broadcasting: 3\nI0525 11:17:50.941652    1531 log.go:172] (0xc000b849a0) (0xc00021a000) Stream removed, broadcasting: 5\n"
May 25 11:17:50.947: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 25 11:17:50.947: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 25 11:17:50.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5496 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 11:17:51.146: INFO: stderr: "I0525 11:17:51.085476    1554 log.go:172] (0xc000774840) (0xc0008780a0) Create stream\nI0525 11:17:51.085550    1554 log.go:172] (0xc000774840) (0xc0008780a0) Stream added, broadcasting: 1\nI0525 11:17:51.088363    1554 log.go:172] (0xc000774840) Reply frame received for 1\nI0525 11:17:51.088432    1554 log.go:172] (0xc000774840) (0xc0006a52c0) Create stream\nI0525 11:17:51.088451    1554 log.go:172] (0xc000774840) (0xc0006a52c0) Stream added, broadcasting: 3\nI0525 11:17:51.089991    1554 log.go:172] (0xc000774840) Reply frame received for 3\nI0525 11:17:51.090025    1554 log.go:172] (0xc000774840) (0xc0008781e0) Create stream\nI0525 11:17:51.090036    1554 log.go:172] (0xc000774840) (0xc0008781e0) Stream added, broadcasting: 5\nI0525 11:17:51.090820    1554 log.go:172] (0xc000774840) Reply frame received for 5\nI0525 11:17:51.141333    1554 log.go:172] (0xc000774840) Data frame received for 3\nI0525 11:17:51.141379    1554 log.go:172] (0xc0006a52c0) (3) Data frame handling\nI0525 11:17:51.141397    1554 log.go:172] (0xc0006a52c0) (3) Data frame sent\nI0525 11:17:51.141422    1554 log.go:172] (0xc000774840) Data frame received for 3\nI0525 11:17:51.141436    1554 log.go:172] (0xc0006a52c0) (3) Data frame handling\nI0525 11:17:51.141473    1554 log.go:172] (0xc000774840) Data frame received for 5\nI0525 11:17:51.141495    1554 log.go:172] (0xc0008781e0) (5) Data frame handling\nI0525 11:17:51.141519    1554 log.go:172] (0xc0008781e0) (5) Data frame sent\nI0525 11:17:51.141533    1554 log.go:172] (0xc000774840) Data frame received for 5\nI0525 11:17:51.141543    1554 log.go:172] (0xc0008781e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 11:17:51.142631    1554 log.go:172] (0xc000774840) Data frame received for 1\nI0525 11:17:51.142650    1554 log.go:172] (0xc0008780a0) (1) Data frame handling\nI0525 11:17:51.142661    1554 log.go:172] (0xc0008780a0) (1) Data frame sent\nI0525 11:17:51.142669    1554 log.go:172] (0xc000774840) (0xc0008780a0) Stream removed, broadcasting: 1\nI0525 11:17:51.142679    1554 log.go:172] (0xc000774840) Go away received\nI0525 11:17:51.143088    1554 log.go:172] (0xc000774840) (0xc0008780a0) Stream removed, broadcasting: 1\nI0525 11:17:51.143115    1554 log.go:172] (0xc000774840) (0xc0006a52c0) Stream removed, broadcasting: 3\nI0525 11:17:51.143132    1554 log.go:172] (0xc000774840) (0xc0008781e0) Stream removed, broadcasting: 5\n"
May 25 11:17:51.146: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 25 11:17:51.146: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 25 11:17:51.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5496 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 11:17:51.364: INFO: stderr: "I0525 11:17:51.282559    1576 log.go:172] (0xc00003b130) (0xc0006697c0) Create stream\nI0525 11:17:51.282631    1576 log.go:172] (0xc00003b130) (0xc0006697c0) Stream added, broadcasting: 1\nI0525 11:17:51.285044    1576 log.go:172] (0xc00003b130) Reply frame received for 1\nI0525 11:17:51.285099    1576 log.go:172] (0xc00003b130) (0xc0004b4b40) Create stream\nI0525 11:17:51.285306    1576 log.go:172] (0xc00003b130) (0xc0004b4b40) Stream added, broadcasting: 3\nI0525 11:17:51.286103    1576 log.go:172] (0xc00003b130) Reply frame received for 3\nI0525 11:17:51.286137    1576 log.go:172] (0xc00003b130) (0xc000669860) Create stream\nI0525 11:17:51.286147    1576 log.go:172] (0xc00003b130) (0xc000669860) Stream added, broadcasting: 5\nI0525 11:17:51.286767    1576 log.go:172] (0xc00003b130) Reply frame received for 5\nI0525 11:17:51.354762    1576 log.go:172] (0xc00003b130) Data frame received for 5\nI0525 11:17:51.354796    1576 log.go:172] (0xc000669860) (5) Data frame handling\nI0525 11:17:51.354818    1576 log.go:172] (0xc000669860) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 11:17:51.355772    1576 log.go:172] (0xc00003b130) Data frame received for 3\nI0525 11:17:51.355807    1576 log.go:172] (0xc0004b4b40) (3) Data frame handling\nI0525 11:17:51.355825    1576 log.go:172] (0xc0004b4b40) (3) Data frame sent\nI0525 11:17:51.355983    1576 log.go:172] (0xc00003b130) Data frame received for 5\nI0525 11:17:51.356013    1576 log.go:172] (0xc000669860) (5) Data frame handling\nI0525 11:17:51.356116    1576 log.go:172] (0xc00003b130) Data frame received for 3\nI0525 11:17:51.356131    1576 log.go:172] (0xc0004b4b40) (3) Data frame handling\nI0525 11:17:51.357707    1576 log.go:172] (0xc00003b130) Data frame received for 1\nI0525 11:17:51.357743    1576 log.go:172] (0xc0006697c0) (1) Data frame handling\nI0525 11:17:51.357764    1576 log.go:172] (0xc0006697c0) (1) Data frame sent\nI0525 11:17:51.357776    1576 log.go:172] (0xc00003b130) (0xc0006697c0) Stream removed, broadcasting: 1\nI0525 11:17:51.357792    1576 log.go:172] (0xc00003b130) Go away received\nI0525 11:17:51.358254    1576 log.go:172] (0xc00003b130) (0xc0006697c0) Stream removed, broadcasting: 1\nI0525 11:17:51.358283    1576 log.go:172] (0xc00003b130) (0xc0004b4b40) Stream removed, broadcasting: 3\nI0525 11:17:51.358305    1576 log.go:172] (0xc00003b130) (0xc000669860) Stream removed, broadcasting: 5\n"
May 25 11:17:51.364: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 25 11:17:51.364: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 25 11:17:51.364: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 25 11:18:31.381: INFO: Deleting all statefulset in ns statefulset-5496
May 25 11:18:31.384: INFO: Scaling statefulset ss to 0
May 25 11:18:31.393: INFO: Waiting for statefulset status.replicas updated to 0
May 25 11:18:31.396: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:18:31.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5496" for this suite.

• [SLOW TEST:102.970 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":85,"skipped":1451,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:18:31.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:18:42.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7009" for this suite.

• [SLOW TEST:11.227 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":86,"skipped":1454,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:18:42.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:18:42.846: INFO: (0) /api/v1/nodes/kali-worker2/proxy/logs/: 
alternatives.log
containers/
[the same directory listing is returned for each of the remaining proxy requests; the tail of this Proxy test block and the header of the following [sig-network] Networking node-pod UDP test are missing from this capture]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-7442
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 25 11:18:43.000: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 25 11:18:43.050: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 11:18:45.055: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 11:18:47.055: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 11:18:49.055: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:18:51.055: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:18:53.055: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:18:55.054: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:18:57.055: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:18:59.055: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:19:01.055: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:19:03.055: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:19:05.054: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:19:07.055: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 25 11:19:07.063: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 25 11:19:11.176: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.122 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7442 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:19:11.176: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:19:11.205326       7 log.go:172] (0xc0029a13f0) (0xc001097f40) Create stream
I0525 11:19:11.205354       7 log.go:172] (0xc0029a13f0) (0xc001097f40) Stream added, broadcasting: 1
I0525 11:19:11.206771       7 log.go:172] (0xc0029a13f0) Reply frame received for 1
I0525 11:19:11.206812       7 log.go:172] (0xc0029a13f0) (0xc0029965a0) Create stream
I0525 11:19:11.206829       7 log.go:172] (0xc0029a13f0) (0xc0029965a0) Stream added, broadcasting: 3
I0525 11:19:11.207907       7 log.go:172] (0xc0029a13f0) Reply frame received for 3
I0525 11:19:11.207935       7 log.go:172] (0xc0029a13f0) (0xc002ab8fa0) Create stream
I0525 11:19:11.207947       7 log.go:172] (0xc0029a13f0) (0xc002ab8fa0) Stream added, broadcasting: 5
I0525 11:19:11.208723       7 log.go:172] (0xc0029a13f0) Reply frame received for 5
I0525 11:19:12.305388       7 log.go:172] (0xc0029a13f0) Data frame received for 5
I0525 11:19:12.305446       7 log.go:172] (0xc002ab8fa0) (5) Data frame handling
I0525 11:19:12.305506       7 log.go:172] (0xc0029a13f0) Data frame received for 3
I0525 11:19:12.305530       7 log.go:172] (0xc0029965a0) (3) Data frame handling
I0525 11:19:12.305567       7 log.go:172] (0xc0029965a0) (3) Data frame sent
I0525 11:19:12.305596       7 log.go:172] (0xc0029a13f0) Data frame received for 3
I0525 11:19:12.305614       7 log.go:172] (0xc0029965a0) (3) Data frame handling
I0525 11:19:12.310815       7 log.go:172] (0xc0029a13f0) Data frame received for 1
I0525 11:19:12.310841       7 log.go:172] (0xc001097f40) (1) Data frame handling
I0525 11:19:12.310850       7 log.go:172] (0xc001097f40) (1) Data frame sent
I0525 11:19:12.310861       7 log.go:172] (0xc0029a13f0) (0xc001097f40) Stream removed, broadcasting: 1
I0525 11:19:12.310873       7 log.go:172] (0xc0029a13f0) Go away received
I0525 11:19:12.310964       7 log.go:172] (0xc0029a13f0) (0xc001097f40) Stream removed, broadcasting: 1
I0525 11:19:12.310977       7 log.go:172] (0xc0029a13f0) (0xc0029965a0) Stream removed, broadcasting: 3
I0525 11:19:12.310983       7 log.go:172] (0xc0029a13f0) (0xc002ab8fa0) Stream removed, broadcasting: 5
May 25 11:19:12.310: INFO: Found all expected endpoints: [netserver-0]
May 25 11:19:12.339: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.124 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7442 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:19:12.339: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:19:12.372920       7 log.go:172] (0xc00271cb00) (0xc000e6a780) Create stream
I0525 11:19:12.372947       7 log.go:172] (0xc00271cb00) (0xc000e6a780) Stream added, broadcasting: 1
I0525 11:19:12.375011       7 log.go:172] (0xc00271cb00) Reply frame received for 1
I0525 11:19:12.375057       7 log.go:172] (0xc00271cb00) (0xc002996640) Create stream
I0525 11:19:12.375073       7 log.go:172] (0xc00271cb00) (0xc002996640) Stream added, broadcasting: 3
I0525 11:19:12.376005       7 log.go:172] (0xc00271cb00) Reply frame received for 3
I0525 11:19:12.376044       7 log.go:172] (0xc00271cb00) (0xc0029966e0) Create stream
I0525 11:19:12.376072       7 log.go:172] (0xc00271cb00) (0xc0029966e0) Stream added, broadcasting: 5
I0525 11:19:12.377350       7 log.go:172] (0xc00271cb00) Reply frame received for 5
I0525 11:19:13.464983       7 log.go:172] (0xc00271cb00) Data frame received for 3
I0525 11:19:13.465059       7 log.go:172] (0xc002996640) (3) Data frame handling
I0525 11:19:13.465307       7 log.go:172] (0xc002996640) (3) Data frame sent
I0525 11:19:13.465684       7 log.go:172] (0xc00271cb00) Data frame received for 3
I0525 11:19:13.465705       7 log.go:172] (0xc002996640) (3) Data frame handling
I0525 11:19:13.465733       7 log.go:172] (0xc00271cb00) Data frame received for 5
I0525 11:19:13.465746       7 log.go:172] (0xc0029966e0) (5) Data frame handling
I0525 11:19:13.466997       7 log.go:172] (0xc00271cb00) Data frame received for 1
I0525 11:19:13.467009       7 log.go:172] (0xc000e6a780) (1) Data frame handling
I0525 11:19:13.467028       7 log.go:172] (0xc000e6a780) (1) Data frame sent
I0525 11:19:13.467046       7 log.go:172] (0xc00271cb00) (0xc000e6a780) Stream removed, broadcasting: 1
I0525 11:19:13.467063       7 log.go:172] (0xc00271cb00) Go away received
I0525 11:19:13.467165       7 log.go:172] (0xc00271cb00) (0xc000e6a780) Stream removed, broadcasting: 1
I0525 11:19:13.467184       7 log.go:172] (0xc00271cb00) (0xc002996640) Stream removed, broadcasting: 3
I0525 11:19:13.467193       7 log.go:172] (0xc00271cb00) (0xc0029966e0) Stream removed, broadcasting: 5
May 25 11:19:13.467: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:19:13.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7442" for this suite.

• [SLOW TEST:30.555 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1513,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:19:13.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-8411/configmap-test-e863f0c5-e437-49e5-88a2-ff325e2e96d4
STEP: Creating a pod to test consume configMaps
May 25 11:19:13.651: INFO: Waiting up to 5m0s for pod "pod-configmaps-514ff841-4ab7-4003-a98a-58609217a6ca" in namespace "configmap-8411" to be "Succeeded or Failed"
May 25 11:19:13.655: INFO: Pod "pod-configmaps-514ff841-4ab7-4003-a98a-58609217a6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.600316ms
May 25 11:19:15.660: INFO: Pod "pod-configmaps-514ff841-4ab7-4003-a98a-58609217a6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008566872s
May 25 11:19:17.664: INFO: Pod "pod-configmaps-514ff841-4ab7-4003-a98a-58609217a6ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012700214s
STEP: Saw pod success
May 25 11:19:17.664: INFO: Pod "pod-configmaps-514ff841-4ab7-4003-a98a-58609217a6ca" satisfied condition "Succeeded or Failed"
May 25 11:19:17.667: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-514ff841-4ab7-4003-a98a-58609217a6ca container env-test: 
STEP: delete the pod
May 25 11:19:17.724: INFO: Waiting for pod pod-configmaps-514ff841-4ab7-4003-a98a-58609217a6ca to disappear
May 25 11:19:17.782: INFO: Pod pod-configmaps-514ff841-4ab7-4003-a98a-58609217a6ca no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:19:17.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8411" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1581,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:19:17.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 11:19:19.772: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 11:19:21.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726002359, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726002359, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726002360, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726002359, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:19:23.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726002359, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726002359, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726002360, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726002359, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 11:19:26.823: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:19:26.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9876" for this suite.
STEP: Destroying namespace "webhook-9876-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.184 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":90,"skipped":1603,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:19:26.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 25 11:19:27.066: INFO: Waiting up to 5m0s for pod "pod-85608052-bca3-499c-aade-8faeb5a68d2f" in namespace "emptydir-2767" to be "Succeeded or Failed"
May 25 11:19:27.069: INFO: Pod "pod-85608052-bca3-499c-aade-8faeb5a68d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.823017ms
May 25 11:19:29.214: INFO: Pod "pod-85608052-bca3-499c-aade-8faeb5a68d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148824412s
May 25 11:19:31.274: INFO: Pod "pod-85608052-bca3-499c-aade-8faeb5a68d2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.208846777s
STEP: Saw pod success
May 25 11:19:31.274: INFO: Pod "pod-85608052-bca3-499c-aade-8faeb5a68d2f" satisfied condition "Succeeded or Failed"
May 25 11:19:31.277: INFO: Trying to get logs from node kali-worker2 pod pod-85608052-bca3-499c-aade-8faeb5a68d2f container test-container: 
STEP: delete the pod
May 25 11:19:31.412: INFO: Waiting for pod pod-85608052-bca3-499c-aade-8faeb5a68d2f to disappear
May 25 11:19:31.430: INFO: Pod pod-85608052-bca3-499c-aade-8faeb5a68d2f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:19:31.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2767" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1609,"failed":0}

------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:19:31.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:19:31.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6202" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":92,"skipped":1609,"failed":0}
SSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:19:31.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
May 25 11:19:31.622: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
May 25 11:19:31.626: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May 25 11:19:31.626: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
May 25 11:19:31.682: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May 25 11:19:31.682: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
May 25 11:19:31.719: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
May 25 11:19:31.719: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
May 25 11:19:39.147: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:19:39.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-7905" for this suite.

• [SLOW TEST:7.661 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":93,"skipped":1612,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:19:39.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 25 11:19:43.884: INFO: Successfully updated pod "annotationupdate66e8b28d-75b1-4e89-b8f6-f62542ab5203"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:19:45.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4514" for this suite.

• [SLOW TEST:6.758 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1672,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:19:45.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 25 11:19:46.685: INFO: Waiting up to 5m0s for pod "downward-api-ace94d1b-71f6-492e-8119-1a947a12028b" in namespace "downward-api-3385" to be "Succeeded or Failed"
May 25 11:19:46.772: INFO: Pod "downward-api-ace94d1b-71f6-492e-8119-1a947a12028b": Phase="Pending", Reason="", readiness=false. Elapsed: 87.112446ms
May 25 11:19:48.789: INFO: Pod "downward-api-ace94d1b-71f6-492e-8119-1a947a12028b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103821029s
May 25 11:19:50.813: INFO: Pod "downward-api-ace94d1b-71f6-492e-8119-1a947a12028b": Phase="Running", Reason="", readiness=true. Elapsed: 4.128079717s
May 25 11:19:52.818: INFO: Pod "downward-api-ace94d1b-71f6-492e-8119-1a947a12028b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.132407214s
STEP: Saw pod success
May 25 11:19:52.818: INFO: Pod "downward-api-ace94d1b-71f6-492e-8119-1a947a12028b" satisfied condition "Succeeded or Failed"
May 25 11:19:52.821: INFO: Trying to get logs from node kali-worker2 pod downward-api-ace94d1b-71f6-492e-8119-1a947a12028b container dapi-container: 
STEP: delete the pod
May 25 11:19:52.864: INFO: Waiting for pod downward-api-ace94d1b-71f6-492e-8119-1a947a12028b to disappear
May 25 11:19:52.909: INFO: Pod downward-api-ace94d1b-71f6-492e-8119-1a947a12028b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:19:52.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3385" for this suite.

• [SLOW TEST:6.957 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1702,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:19:52.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 25 11:19:57.614: INFO: Successfully updated pod "labelsupdate64737434-7407-4731-9573-78a89656a7e2"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:19:59.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2673" for this suite.

• [SLOW TEST:6.750 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1725,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:19:59.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8290.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8290.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8290.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8290.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8290.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8290.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 25 11:20:06.158: INFO: DNS probes using dns-8290/dns-test-d688eab7-989d-4aff-9a48-812d684ee7fe succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:20:06.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8290" for this suite.

• [SLOW TEST:6.751 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":97,"skipped":1738,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:20:06.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:20:22.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9804" for this suite.

• [SLOW TEST:16.554 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":98,"skipped":1750,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:20:22.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-c03d78d8-97c0-4544-9bc3-4df9973c144b in namespace container-probe-8505
May 25 11:20:27.071: INFO: Started pod test-webserver-c03d78d8-97c0-4544-9bc3-4df9973c144b in namespace container-probe-8505
STEP: checking the pod's current state and verifying that restartCount is present
May 25 11:20:27.074: INFO: Initial restart count of pod test-webserver-c03d78d8-97c0-4544-9bc3-4df9973c144b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:24:27.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8505" for this suite.

• [SLOW TEST:244.992 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1761,"failed":0}
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:24:27.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 25 11:24:28.010: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 25 11:24:28.107: INFO: Waiting for terminating namespaces to be deleted...
May 25 11:24:28.109: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
May 25 11:24:28.126: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:24:28.126: INFO: 	Container kube-proxy ready: true, restart count 0
May 25 11:24:28.126: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:24:28.126: INFO: 	Container kindnet-cni ready: true, restart count 1
May 25 11:24:28.126: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
May 25 11:24:28.141: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:24:28.142: INFO: 	Container kindnet-cni ready: true, restart count 0
May 25 11:24:28.142: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:24:28.142: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
May 25 11:24:28.266: INFO: Pod kindnet-f8plf requesting resource cpu=100m on Node kali-worker
May 25 11:24:28.266: INFO: Pod kindnet-mcdh2 requesting resource cpu=100m on Node kali-worker2
May 25 11:24:28.266: INFO: Pod kube-proxy-mmnb6 requesting resource cpu=0m on Node kali-worker2
May 25 11:24:28.266: INFO: Pod kube-proxy-vrswj requesting resource cpu=0m on Node kali-worker
STEP: Starting Pods to consume most of the cluster CPU.
May 25 11:24:28.266: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
May 25 11:24:28.273: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2d03e599-b831-48fd-bc04-2f4d7a418a50.161241b5ff7b51d7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6888/filler-pod-2d03e599-b831-48fd-bc04-2f4d7a418a50 to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2d03e599-b831-48fd-bc04-2f4d7a418a50.161241b695e52136], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2d03e599-b831-48fd-bc04-2f4d7a418a50.161241b6d7fa8349], Reason = [Created], Message = [Created container filler-pod-2d03e599-b831-48fd-bc04-2f4d7a418a50]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2d03e599-b831-48fd-bc04-2f4d7a418a50.161241b6e7c4d6e6], Reason = [Started], Message = [Started container filler-pod-2d03e599-b831-48fd-bc04-2f4d7a418a50]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-63f7ccc2-149f-469c-8058-24b018c5a93d.161241b5fc7afae4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6888/filler-pod-63f7ccc2-149f-469c-8058-24b018c5a93d to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-63f7ccc2-149f-469c-8058-24b018c5a93d.161241b65ca97c9b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-63f7ccc2-149f-469c-8058-24b018c5a93d.161241b6b70e4076], Reason = [Created], Message = [Created container filler-pod-63f7ccc2-149f-469c-8058-24b018c5a93d]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-63f7ccc2-149f-469c-8058-24b018c5a93d.161241b6d7fa8398], Reason = [Started], Message = [Started container filler-pod-63f7ccc2-149f-469c-8058-24b018c5a93d]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.161241b766a467d4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.161241b76882b63f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:24:35.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6888" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:7.729 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":100,"skipped":1770,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:24:35.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:24:35.874: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:24:37.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3201" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":101,"skipped":1782,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:24:37.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
May 25 11:24:37.512: INFO: Waiting up to 5m0s for pod "pod-f84d5066-45bd-4d08-8208-bb3b7b19f71c" in namespace "emptydir-602" to be "Succeeded or Failed"
May 25 11:24:37.528: INFO: Pod "pod-f84d5066-45bd-4d08-8208-bb3b7b19f71c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.698918ms
May 25 11:24:39.648: INFO: Pod "pod-f84d5066-45bd-4d08-8208-bb3b7b19f71c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135624929s
May 25 11:24:41.942: INFO: Pod "pod-f84d5066-45bd-4d08-8208-bb3b7b19f71c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430030735s
STEP: Saw pod success
May 25 11:24:41.943: INFO: Pod "pod-f84d5066-45bd-4d08-8208-bb3b7b19f71c" satisfied condition "Succeeded or Failed"
May 25 11:24:42.278: INFO: Trying to get logs from node kali-worker pod pod-f84d5066-45bd-4d08-8208-bb3b7b19f71c container test-container: 
STEP: delete the pod
May 25 11:24:43.074: INFO: Waiting for pod pod-f84d5066-45bd-4d08-8208-bb3b7b19f71c to disappear
May 25 11:24:43.111: INFO: Pod pod-f84d5066-45bd-4d08-8208-bb3b7b19f71c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:24:43.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-602" for this suite.

• [SLOW TEST:6.082 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1795,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:24:43.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 25 11:24:44.869: INFO: Pod name wrapped-volume-race-1131fd10-c5a5-4b79-bef4-533f6b005691: Found 0 pods out of 5
May 25 11:24:49.878: INFO: Pod name wrapped-volume-race-1131fd10-c5a5-4b79-bef4-533f6b005691: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1131fd10-c5a5-4b79-bef4-533f6b005691 in namespace emptydir-wrapper-504, will wait for the garbage collector to delete the pods
May 25 11:25:03.963: INFO: Deleting ReplicationController wrapped-volume-race-1131fd10-c5a5-4b79-bef4-533f6b005691 took: 7.218497ms
May 25 11:25:04.263: INFO: Terminating ReplicationController wrapped-volume-race-1131fd10-c5a5-4b79-bef4-533f6b005691 pods took: 300.266261ms
STEP: Creating RC which spawns configmap-volume pods
May 25 11:25:14.048: INFO: Pod name wrapped-volume-race-207467e3-dd3d-4beb-8262-6f0994da067e: Found 0 pods out of 5
May 25 11:25:19.057: INFO: Pod name wrapped-volume-race-207467e3-dd3d-4beb-8262-6f0994da067e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-207467e3-dd3d-4beb-8262-6f0994da067e in namespace emptydir-wrapper-504, will wait for the garbage collector to delete the pods
May 25 11:25:35.133: INFO: Deleting ReplicationController wrapped-volume-race-207467e3-dd3d-4beb-8262-6f0994da067e took: 5.386869ms
May 25 11:25:35.534: INFO: Terminating ReplicationController wrapped-volume-race-207467e3-dd3d-4beb-8262-6f0994da067e pods took: 400.249847ms
STEP: Creating RC which spawns configmap-volume pods
May 25 11:25:44.540: INFO: Pod name wrapped-volume-race-b33b2199-0a37-4040-89ed-a7285a860b26: Found 0 pods out of 5
May 25 11:25:49.556: INFO: Pod name wrapped-volume-race-b33b2199-0a37-4040-89ed-a7285a860b26: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b33b2199-0a37-4040-89ed-a7285a860b26 in namespace emptydir-wrapper-504, will wait for the garbage collector to delete the pods
May 25 11:26:05.712: INFO: Deleting ReplicationController wrapped-volume-race-b33b2199-0a37-4040-89ed-a7285a860b26 took: 7.43188ms
May 25 11:26:06.012: INFO: Terminating ReplicationController wrapped-volume-race-b33b2199-0a37-4040-89ed-a7285a860b26 pods took: 300.262487ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:26:14.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-504" for this suite.

• [SLOW TEST:91.402 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":103,"skipped":1799,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:26:14.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:26:14.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 25 11:26:16.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8327 create -f -'
May 25 11:26:20.121: INFO: stderr: ""
May 25 11:26:20.121: INFO: stdout: "e2e-test-crd-publish-openapi-412-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 25 11:26:20.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8327 delete e2e-test-crd-publish-openapi-412-crds test-cr'
May 25 11:26:20.224: INFO: stderr: ""
May 25 11:26:20.224: INFO: stdout: "e2e-test-crd-publish-openapi-412-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 25 11:26:20.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8327 apply -f -'
May 25 11:26:20.615: INFO: stderr: ""
May 25 11:26:20.615: INFO: stdout: "e2e-test-crd-publish-openapi-412-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 25 11:26:20.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8327 delete e2e-test-crd-publish-openapi-412-crds test-cr'
May 25 11:26:20.740: INFO: stderr: ""
May 25 11:26:20.740: INFO: stdout: "e2e-test-crd-publish-openapi-412-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 25 11:26:20.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-412-crds'
May 25 11:26:21.026: INFO: stderr: ""
May 25 11:26:21.026: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-412-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:26:23.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8327" for this suite.

• [SLOW TEST:9.142 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":104,"skipped":1846,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:26:23.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:26:24.096: INFO: Create a RollingUpdate DaemonSet
May 25 11:26:24.100: INFO: Check that daemon pods launch on every node of the cluster
May 25 11:26:24.112: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 11:26:24.117: INFO: Number of nodes with available pods: 0
May 25 11:26:24.117: INFO: Node kali-worker is running more than one daemon pod
May 25 11:26:25.123: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 11:26:25.127: INFO: Number of nodes with available pods: 0
May 25 11:26:25.127: INFO: Node kali-worker is running more than one daemon pod
May 25 11:26:26.123: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 11:26:26.128: INFO: Number of nodes with available pods: 0
May 25 11:26:26.128: INFO: Node kali-worker is running more than one daemon pod
May 25 11:26:27.122: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 11:26:27.126: INFO: Number of nodes with available pods: 0
May 25 11:26:27.126: INFO: Node kali-worker is running more than one daemon pod
May 25 11:26:28.122: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 11:26:28.125: INFO: Number of nodes with available pods: 0
May 25 11:26:28.125: INFO: Node kali-worker is running more than one daemon pod
May 25 11:26:29.138: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 11:26:29.140: INFO: Number of nodes with available pods: 2
May 25 11:26:29.140: INFO: Number of running nodes: 2, number of available pods: 2
May 25 11:26:29.140: INFO: Update the DaemonSet to trigger a rollout
May 25 11:26:29.146: INFO: Updating DaemonSet daemon-set
May 25 11:26:34.242: INFO: Roll back the DaemonSet before rollout is complete
May 25 11:26:34.250: INFO: Updating DaemonSet daemon-set
May 25 11:26:34.250: INFO: Make sure DaemonSet rollback is complete
May 25 11:26:34.267: INFO: Wrong image for pod: daemon-set-xsx45. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 25 11:26:34.267: INFO: Pod daemon-set-xsx45 is not available
May 25 11:26:34.285: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 11:26:35.289: INFO: Wrong image for pod: daemon-set-xsx45. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 25 11:26:35.289: INFO: Pod daemon-set-xsx45 is not available
May 25 11:26:35.293: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 11:26:36.300: INFO: Pod daemon-set-7d6qs is not available
May 25 11:26:36.304: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7904, will wait for the garbage collector to delete the pods
May 25 11:26:36.369: INFO: Deleting DaemonSet.extensions daemon-set took: 6.24972ms
May 25 11:26:36.670: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.270056ms
May 25 11:26:43.974: INFO: Number of nodes with available pods: 0
May 25 11:26:43.974: INFO: Number of running nodes: 0, number of available pods: 0
May 25 11:26:43.980: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7904/daemonsets","resourceVersion":"7173528"},"items":null}

May 25 11:26:44.027: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7904/pods","resourceVersion":"7173528"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:26:44.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7904" for this suite.

• [SLOW TEST:20.081 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":105,"skipped":1847,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:26:44.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 25 11:27:01.885: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 25 11:27:01.961: INFO: Pod pod-with-poststart-exec-hook still exists
May 25 11:27:03.961: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 25 11:27:04.003: INFO: Pod pod-with-poststart-exec-hook still exists
May 25 11:27:05.961: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 25 11:27:06.003: INFO: Pod pod-with-poststart-exec-hook still exists
May 25 11:27:07.961: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 25 11:27:07.982: INFO: Pod pod-with-poststart-exec-hook still exists
May 25 11:27:09.961: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 25 11:27:09.966: INFO: Pod pod-with-poststart-exec-hook still exists
May 25 11:27:11.961: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 25 11:27:11.965: INFO: Pod pod-with-poststart-exec-hook still exists
May 25 11:27:13.961: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 25 11:27:13.965: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:27:13.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2995" for this suite.

• [SLOW TEST:29.929 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1848,"failed":0}
SSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:27:13.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:27:20.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-542" for this suite.

• [SLOW TEST:6.330 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1851,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:27:20.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:27:20.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8309
I0525 11:27:20.946058       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8309, replica count: 1
I0525 11:27:21.996547       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:27:22.996770       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:27:23.997046       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:27:24.997264       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 25 11:27:25.138: INFO: Created: latency-svc-89f7l
May 25 11:27:25.150: INFO: Got endpoints: latency-svc-89f7l [53.445999ms]
May 25 11:27:25.261: INFO: Created: latency-svc-sx7gz
May 25 11:27:25.276: INFO: Got endpoints: latency-svc-sx7gz [125.575708ms]
May 25 11:27:25.295: INFO: Created: latency-svc-dlglc
May 25 11:27:25.330: INFO: Got endpoints: latency-svc-dlglc [179.37425ms]
May 25 11:27:25.410: INFO: Created: latency-svc-qdk74
May 25 11:27:25.445: INFO: Got endpoints: latency-svc-qdk74 [294.180461ms]
May 25 11:27:25.506: INFO: Created: latency-svc-ps97z
May 25 11:27:25.578: INFO: Got endpoints: latency-svc-ps97z [426.978316ms]
May 25 11:27:25.600: INFO: Created: latency-svc-lmv5n
May 25 11:27:25.617: INFO: Got endpoints: latency-svc-lmv5n [466.465514ms]
May 25 11:27:25.654: INFO: Created: latency-svc-q8g7z
May 25 11:27:25.671: INFO: Got endpoints: latency-svc-q8g7z [520.334844ms]
May 25 11:27:25.740: INFO: Created: latency-svc-znlmk
May 25 11:27:25.743: INFO: Got endpoints: latency-svc-znlmk [592.755856ms]
May 25 11:27:25.822: INFO: Created: latency-svc-gmvpl
May 25 11:27:25.871: INFO: Got endpoints: latency-svc-gmvpl [720.200137ms]
May 25 11:27:25.931: INFO: Created: latency-svc-p5kzb
May 25 11:27:25.969: INFO: Got endpoints: latency-svc-p5kzb [817.531664ms]
May 25 11:27:26.032: INFO: Created: latency-svc-zcj8m
May 25 11:27:26.039: INFO: Got endpoints: latency-svc-zcj8m [887.701581ms]
May 25 11:27:26.100: INFO: Created: latency-svc-f5cjp
May 25 11:27:26.117: INFO: Got endpoints: latency-svc-f5cjp [966.060153ms]
May 25 11:27:26.176: INFO: Created: latency-svc-p4p4t
May 25 11:27:26.183: INFO: Got endpoints: latency-svc-p4p4t [1.032302209s]
May 25 11:27:26.254: INFO: Created: latency-svc-xrtfp
May 25 11:27:26.271: INFO: Got endpoints: latency-svc-xrtfp [1.120133377s]
May 25 11:27:26.338: INFO: Created: latency-svc-cxbzf
May 25 11:27:26.351: INFO: Got endpoints: latency-svc-cxbzf [1.200506131s]
May 25 11:27:26.415: INFO: Created: latency-svc-jp7wd
May 25 11:27:26.436: INFO: Got endpoints: latency-svc-jp7wd [1.285462161s]
May 25 11:27:26.530: INFO: Created: latency-svc-g5rsb
May 25 11:27:26.557: INFO: Got endpoints: latency-svc-g5rsb [1.280919519s]
May 25 11:27:26.602: INFO: Created: latency-svc-mtb2v
May 25 11:27:26.712: INFO: Got endpoints: latency-svc-mtb2v [1.382284228s]
May 25 11:27:26.747: INFO: Created: latency-svc-lszfn
May 25 11:27:26.761: INFO: Got endpoints: latency-svc-lszfn [1.315993209s]
May 25 11:27:26.784: INFO: Created: latency-svc-jw8wf
May 25 11:27:26.803: INFO: Got endpoints: latency-svc-jw8wf [1.22490816s]
May 25 11:27:26.871: INFO: Created: latency-svc-8d9ns
May 25 11:27:26.876: INFO: Got endpoints: latency-svc-8d9ns [1.258365005s]
May 25 11:27:26.959: INFO: Created: latency-svc-f6qbh
May 25 11:27:27.069: INFO: Got endpoints: latency-svc-f6qbh [1.397240196s]
May 25 11:27:27.367: INFO: Created: latency-svc-xgpv5
May 25 11:27:27.543: INFO: Got endpoints: latency-svc-xgpv5 [1.799360754s]
May 25 11:27:27.727: INFO: Created: latency-svc-954wf
May 25 11:27:28.135: INFO: Got endpoints: latency-svc-954wf [2.263832509s]
May 25 11:27:28.412: INFO: Created: latency-svc-wq2bs
May 25 11:27:28.459: INFO: Got endpoints: latency-svc-wq2bs [2.49052951s]
May 25 11:27:28.625: INFO: Created: latency-svc-c4x69
May 25 11:27:28.663: INFO: Got endpoints: latency-svc-c4x69 [2.62432816s]
May 25 11:27:28.744: INFO: Created: latency-svc-zkr6t
May 25 11:27:28.759: INFO: Got endpoints: latency-svc-zkr6t [2.642309187s]
May 25 11:27:28.788: INFO: Created: latency-svc-c78kv
May 25 11:27:28.808: INFO: Got endpoints: latency-svc-c78kv [2.624755966s]
May 25 11:27:28.889: INFO: Created: latency-svc-pdx6f
May 25 11:27:28.893: INFO: Got endpoints: latency-svc-pdx6f [2.621936995s]
May 25 11:27:28.926: INFO: Created: latency-svc-4hxtr
May 25 11:27:28.940: INFO: Got endpoints: latency-svc-4hxtr [2.58904653s]
May 25 11:27:28.974: INFO: Created: latency-svc-nxrgd
May 25 11:27:29.093: INFO: Got endpoints: latency-svc-nxrgd [2.656362954s]
May 25 11:27:29.111: INFO: Created: latency-svc-bf6ss
May 25 11:27:29.147: INFO: Got endpoints: latency-svc-bf6ss [2.589321845s]
May 25 11:27:29.236: INFO: Created: latency-svc-xsng9
May 25 11:27:29.241: INFO: Got endpoints: latency-svc-xsng9 [2.528393241s]
May 25 11:27:29.286: INFO: Created: latency-svc-828k7
May 25 11:27:29.307: INFO: Got endpoints: latency-svc-828k7 [2.546209123s]
May 25 11:27:29.368: INFO: Created: latency-svc-djsq8
May 25 11:27:29.373: INFO: Got endpoints: latency-svc-djsq8 [2.570442083s]
May 25 11:27:29.405: INFO: Created: latency-svc-vgzr4
May 25 11:27:29.423: INFO: Got endpoints: latency-svc-vgzr4 [2.546840654s]
May 25 11:27:29.458: INFO: Created: latency-svc-9fhfd
May 25 11:27:29.524: INFO: Got endpoints: latency-svc-9fhfd [2.455166436s]
May 25 11:27:29.550: INFO: Created: latency-svc-ms82v
May 25 11:27:29.611: INFO: Got endpoints: latency-svc-ms82v [2.06765021s]
May 25 11:27:29.681: INFO: Created: latency-svc-ch9zm
May 25 11:27:29.706: INFO: Got endpoints: latency-svc-ch9zm [1.571301339s]
May 25 11:27:29.735: INFO: Created: latency-svc-7szmh
May 25 11:27:29.754: INFO: Got endpoints: latency-svc-7szmh [1.294545501s]
May 25 11:27:29.875: INFO: Created: latency-svc-2lqww
May 25 11:27:29.880: INFO: Got endpoints: latency-svc-2lqww [1.216578407s]
May 25 11:27:29.977: INFO: Created: latency-svc-j95rt
May 25 11:27:30.012: INFO: Got endpoints: latency-svc-j95rt [1.252512773s]
May 25 11:27:30.372: INFO: Created: latency-svc-krc5q
May 25 11:27:30.396: INFO: Got endpoints: latency-svc-krc5q [1.58839042s]
May 25 11:27:30.571: INFO: Created: latency-svc-7hrf9
May 25 11:27:30.577: INFO: Got endpoints: latency-svc-7hrf9 [1.684329062s]
May 25 11:27:30.647: INFO: Created: latency-svc-h46nj
May 25 11:27:30.698: INFO: Got endpoints: latency-svc-h46nj [1.7577982s]
May 25 11:27:30.955: INFO: Created: latency-svc-zrrpb
May 25 11:27:30.992: INFO: Got endpoints: latency-svc-zrrpb [1.898616642s]
May 25 11:27:31.188: INFO: Created: latency-svc-z4wc8
May 25 11:27:31.231: INFO: Got endpoints: latency-svc-z4wc8 [2.084320362s]
May 25 11:27:31.350: INFO: Created: latency-svc-jcf8h
May 25 11:27:31.373: INFO: Got endpoints: latency-svc-jcf8h [2.132183596s]
May 25 11:27:31.409: INFO: Created: latency-svc-xbrlv
May 25 11:27:31.420: INFO: Got endpoints: latency-svc-xbrlv [2.112417597s]
May 25 11:27:31.523: INFO: Created: latency-svc-g8q7v
May 25 11:27:31.545: INFO: Got endpoints: latency-svc-g8q7v [2.171075808s]
May 25 11:27:31.589: INFO: Created: latency-svc-sjms7
May 25 11:27:31.902: INFO: Got endpoints: latency-svc-sjms7 [2.479386496s]
May 25 11:27:31.907: INFO: Created: latency-svc-wjkdh
May 25 11:27:31.936: INFO: Got endpoints: latency-svc-wjkdh [2.412481393s]
May 25 11:27:32.099: INFO: Created: latency-svc-r6jw4
May 25 11:27:32.108: INFO: Got endpoints: latency-svc-r6jw4 [2.497623004s]
May 25 11:27:32.185: INFO: Created: latency-svc-hd6m5
May 25 11:27:32.267: INFO: Got endpoints: latency-svc-hd6m5 [2.560635192s]
May 25 11:27:32.314: INFO: Created: latency-svc-f5pdx
May 25 11:27:32.596: INFO: Got endpoints: latency-svc-f5pdx [2.84209193s]
May 25 11:27:32.776: INFO: Created: latency-svc-cxls2
May 25 11:27:32.789: INFO: Got endpoints: latency-svc-cxls2 [2.909430105s]
May 25 11:27:32.831: INFO: Created: latency-svc-wrxsk
May 25 11:27:32.842: INFO: Got endpoints: latency-svc-wrxsk [2.830029834s]
May 25 11:27:32.929: INFO: Created: latency-svc-s8mdv
May 25 11:27:32.956: INFO: Got endpoints: latency-svc-s8mdv [2.559809485s]
May 25 11:27:33.074: INFO: Created: latency-svc-llf44
May 25 11:27:33.109: INFO: Got endpoints: latency-svc-llf44 [2.532171923s]
May 25 11:27:33.151: INFO: Created: latency-svc-6qw8k
May 25 11:27:33.216: INFO: Got endpoints: latency-svc-6qw8k [2.517318893s]
May 25 11:27:33.287: INFO: Created: latency-svc-pbj5l
May 25 11:27:33.359: INFO: Got endpoints: latency-svc-pbj5l [2.367282751s]
May 25 11:27:33.426: INFO: Created: latency-svc-ktrh2
May 25 11:27:33.443: INFO: Got endpoints: latency-svc-ktrh2 [2.211852102s]
May 25 11:27:33.507: INFO: Created: latency-svc-bb2z9
May 25 11:27:33.515: INFO: Got endpoints: latency-svc-bb2z9 [2.141987837s]
May 25 11:27:33.553: INFO: Created: latency-svc-rzfnc
May 25 11:27:33.589: INFO: Got endpoints: latency-svc-rzfnc [2.169587631s]
May 25 11:27:33.685: INFO: Created: latency-svc-sfqf6
May 25 11:27:33.716: INFO: Got endpoints: latency-svc-sfqf6 [2.171056551s]
May 25 11:27:33.815: INFO: Created: latency-svc-qmzhp
May 25 11:27:33.835: INFO: Got endpoints: latency-svc-qmzhp [1.932462483s]
May 25 11:27:33.907: INFO: Created: latency-svc-f9gzq
May 25 11:27:33.910: INFO: Got endpoints: latency-svc-f9gzq [1.974035157s]
May 25 11:27:33.961: INFO: Created: latency-svc-hzc65
May 25 11:27:33.979: INFO: Got endpoints: latency-svc-hzc65 [1.870611721s]
May 25 11:27:34.003: INFO: Created: latency-svc-8qmlp
May 25 11:27:34.075: INFO: Got endpoints: latency-svc-8qmlp [1.807753726s]
May 25 11:27:34.097: INFO: Created: latency-svc-c5ssj
May 25 11:27:34.118: INFO: Got endpoints: latency-svc-c5ssj [1.52213955s]
May 25 11:27:34.145: INFO: Created: latency-svc-ntcqz
May 25 11:27:34.212: INFO: Got endpoints: latency-svc-ntcqz [1.422869638s]
May 25 11:27:34.242: INFO: Created: latency-svc-whr4j
May 25 11:27:34.262: INFO: Got endpoints: latency-svc-whr4j [1.42016866s]
May 25 11:27:34.339: INFO: Created: latency-svc-s7m72
May 25 11:27:34.379: INFO: Got endpoints: latency-svc-s7m72 [1.422763153s]
May 25 11:27:34.409: INFO: Created: latency-svc-vwh2p
May 25 11:27:34.419: INFO: Got endpoints: latency-svc-vwh2p [1.30936521s]
May 25 11:27:34.515: INFO: Created: latency-svc-5hkv6
May 25 11:27:34.517: INFO: Got endpoints: latency-svc-5hkv6 [1.301189082s]
May 25 11:27:34.601: INFO: Created: latency-svc-jknnv
May 25 11:27:34.638: INFO: Got endpoints: latency-svc-jknnv [1.278929092s]
May 25 11:27:34.674: INFO: Created: latency-svc-znd7k
May 25 11:27:34.696: INFO: Got endpoints: latency-svc-znd7k [1.252940621s]
May 25 11:27:34.717: INFO: Created: latency-svc-lvr6n
May 25 11:27:34.811: INFO: Got endpoints: latency-svc-lvr6n [1.295687143s]
May 25 11:27:34.859: INFO: Created: latency-svc-dpvqk
May 25 11:27:34.882: INFO: Got endpoints: latency-svc-dpvqk [1.292938369s]
May 25 11:27:34.967: INFO: Created: latency-svc-x87x6
May 25 11:27:35.003: INFO: Got endpoints: latency-svc-x87x6 [1.28669695s]
May 25 11:27:35.045: INFO: Created: latency-svc-7mvvh
May 25 11:27:35.057: INFO: Got endpoints: latency-svc-7mvvh [1.222610516s]
May 25 11:27:35.131: INFO: Created: latency-svc-qv776
May 25 11:27:35.161: INFO: Got endpoints: latency-svc-qv776 [1.250134443s]
May 25 11:27:35.197: INFO: Created: latency-svc-pnz8t
May 25 11:27:35.243: INFO: Got endpoints: latency-svc-pnz8t [1.263679719s]
May 25 11:27:35.260: INFO: Created: latency-svc-k4475
May 25 11:27:35.321: INFO: Got endpoints: latency-svc-k4475 [1.246055222s]
May 25 11:27:35.368: INFO: Created: latency-svc-2x67r
May 25 11:27:35.383: INFO: Got endpoints: latency-svc-2x67r [1.265219861s]
May 25 11:27:35.411: INFO: Created: latency-svc-jghs2
May 25 11:27:35.459: INFO: Got endpoints: latency-svc-jghs2 [1.246766411s]
May 25 11:27:35.535: INFO: Created: latency-svc-xs5cv
May 25 11:27:35.544: INFO: Got endpoints: latency-svc-xs5cv [1.281697544s]
May 25 11:27:35.587: INFO: Created: latency-svc-csmg4
May 25 11:27:35.598: INFO: Got endpoints: latency-svc-csmg4 [1.219147253s]
May 25 11:27:35.685: INFO: Created: latency-svc-pkpzp
May 25 11:27:35.694: INFO: Got endpoints: latency-svc-pkpzp [1.275569296s]
May 25 11:27:35.723: INFO: Created: latency-svc-q79cs
May 25 11:27:35.755: INFO: Got endpoints: latency-svc-q79cs [1.238018731s]
May 25 11:27:35.856: INFO: Created: latency-svc-l2glh
May 25 11:27:35.863: INFO: Got endpoints: latency-svc-l2glh [1.225250338s]
May 25 11:27:35.892: INFO: Created: latency-svc-6gzfg
May 25 11:27:35.912: INFO: Got endpoints: latency-svc-6gzfg [1.215933434s]
May 25 11:27:35.950: INFO: Created: latency-svc-g8tc4
May 25 11:27:36.008: INFO: Got endpoints: latency-svc-g8tc4 [1.197424446s]
May 25 11:27:36.029: INFO: Created: latency-svc-4rf8h
May 25 11:27:36.044: INFO: Got endpoints: latency-svc-4rf8h [1.161844668s]
May 25 11:27:36.096: INFO: Created: latency-svc-s6qsd
May 25 11:27:36.105: INFO: Got endpoints: latency-svc-s6qsd [1.102287195s]
May 25 11:27:36.168: INFO: Created: latency-svc-28h5h
May 25 11:27:36.202: INFO: Got endpoints: latency-svc-28h5h [1.144369266s]
May 25 11:27:36.233: INFO: Created: latency-svc-sbqr9
May 25 11:27:36.279: INFO: Got endpoints: latency-svc-sbqr9 [1.118110642s]
May 25 11:27:36.292: INFO: Created: latency-svc-4mr5r
May 25 11:27:36.310: INFO: Got endpoints: latency-svc-4mr5r [1.067290419s]
May 25 11:27:36.336: INFO: Created: latency-svc-65l9x
May 25 11:27:36.367: INFO: Got endpoints: latency-svc-65l9x [1.045646422s]
May 25 11:27:36.420: INFO: Created: latency-svc-jmxvx
May 25 11:27:36.438: INFO: Got endpoints: latency-svc-jmxvx [1.054180367s]
May 25 11:27:36.466: INFO: Created: latency-svc-lfdt7
May 25 11:27:36.486: INFO: Got endpoints: latency-svc-lfdt7 [1.026640753s]
May 25 11:27:36.553: INFO: Created: latency-svc-w5h85
May 25 11:27:36.580: INFO: Got endpoints: latency-svc-w5h85 [1.036537688s]
May 25 11:27:36.582: INFO: Created: latency-svc-kr82l
May 25 11:27:36.618: INFO: Got endpoints: latency-svc-kr82l [1.019854374s]
May 25 11:27:36.715: INFO: Created: latency-svc-lrr2n
May 25 11:27:36.742: INFO: Got endpoints: latency-svc-lrr2n [1.047664429s]
May 25 11:27:36.743: INFO: Created: latency-svc-xnldg
May 25 11:27:36.757: INFO: Got endpoints: latency-svc-xnldg [1.001921827s]
May 25 11:27:36.810: INFO: Created: latency-svc-bqjpb
May 25 11:27:36.859: INFO: Got endpoints: latency-svc-bqjpb [996.158827ms]
May 25 11:27:36.900: INFO: Created: latency-svc-mr5qc
May 25 11:27:36.938: INFO: Got endpoints: latency-svc-mr5qc [1.026359053s]
May 25 11:27:37.002: INFO: Created: latency-svc-nwt2j
May 25 11:27:37.010: INFO: Got endpoints: latency-svc-nwt2j [1.001813594s]
May 25 11:27:37.060: INFO: Created: latency-svc-87596
May 25 11:27:37.077: INFO: Got endpoints: latency-svc-87596 [1.033014436s]
May 25 11:27:37.152: INFO: Created: latency-svc-jrwzj
May 25 11:27:37.167: INFO: Got endpoints: latency-svc-jrwzj [1.06201332s]
May 25 11:27:37.206: INFO: Created: latency-svc-k6xpc
May 25 11:27:37.221: INFO: Got endpoints: latency-svc-k6xpc [1.019211886s]
May 25 11:27:37.332: INFO: Created: latency-svc-9tshr
May 25 11:27:37.354: INFO: Got endpoints: latency-svc-9tshr [1.075496649s]
May 25 11:27:37.398: INFO: Created: latency-svc-bz654
May 25 11:27:37.421: INFO: Got endpoints: latency-svc-bz654 [1.111332587s]
May 25 11:27:37.496: INFO: Created: latency-svc-6hz7b
May 25 11:27:37.502: INFO: Got endpoints: latency-svc-6hz7b [1.13484807s]
May 25 11:27:37.534: INFO: Created: latency-svc-tl848
May 25 11:27:37.559: INFO: Got endpoints: latency-svc-tl848 [1.12146986s]
May 25 11:27:37.637: INFO: Created: latency-svc-sjh9w
May 25 11:27:37.651: INFO: Got endpoints: latency-svc-sjh9w [1.164844956s]
May 25 11:27:37.726: INFO: Created: latency-svc-5kfqr
May 25 11:27:37.769: INFO: Got endpoints: latency-svc-5kfqr [1.188161342s]
May 25 11:27:37.846: INFO: Created: latency-svc-95z2m
May 25 11:27:37.916: INFO: Got endpoints: latency-svc-95z2m [1.297868894s]
May 25 11:27:37.925: INFO: Created: latency-svc-r86r7
May 25 11:27:37.951: INFO: Got endpoints: latency-svc-r86r7 [1.208189954s]
May 25 11:27:38.063: INFO: Created: latency-svc-jw9lt
May 25 11:27:38.071: INFO: Got endpoints: latency-svc-jw9lt [1.314027976s]
May 25 11:27:38.092: INFO: Created: latency-svc-q2h5l
May 25 11:27:38.101: INFO: Got endpoints: latency-svc-q2h5l [1.241643388s]
May 25 11:27:38.153: INFO: Created: latency-svc-g9jdb
May 25 11:27:38.206: INFO: Got endpoints: latency-svc-g9jdb [1.267684406s]
May 25 11:27:38.226: INFO: Created: latency-svc-62w2f
May 25 11:27:38.240: INFO: Got endpoints: latency-svc-62w2f [1.229705249s]
May 25 11:27:38.290: INFO: Created: latency-svc-2m684
May 25 11:27:38.303: INFO: Got endpoints: latency-svc-2m684 [1.225928383s]
May 25 11:27:38.363: INFO: Created: latency-svc-6pgcl
May 25 11:27:38.387: INFO: Created: latency-svc-fflh9
May 25 11:27:38.389: INFO: Got endpoints: latency-svc-6pgcl [1.222090071s]
May 25 11:27:38.418: INFO: Got endpoints: latency-svc-fflh9 [1.196607244s]
May 25 11:27:38.513: INFO: Created: latency-svc-gslvr
May 25 11:27:38.517: INFO: Got endpoints: latency-svc-gslvr [1.162623765s]
May 25 11:27:38.553: INFO: Created: latency-svc-c9rpv
May 25 11:27:38.583: INFO: Got endpoints: latency-svc-c9rpv [1.161846902s]
May 25 11:27:38.649: INFO: Created: latency-svc-9fpnr
May 25 11:27:38.656: INFO: Got endpoints: latency-svc-9fpnr [1.154418038s]
May 25 11:27:38.705: INFO: Created: latency-svc-zvslt
May 25 11:27:38.723: INFO: Got endpoints: latency-svc-zvslt [1.163711439s]
May 25 11:27:38.747: INFO: Created: latency-svc-p87p4
May 25 11:27:38.787: INFO: Got endpoints: latency-svc-p87p4 [1.13649411s]
May 25 11:27:38.817: INFO: Created: latency-svc-5pp96
May 25 11:27:38.834: INFO: Got endpoints: latency-svc-5pp96 [1.065482344s]
May 25 11:27:38.883: INFO: Created: latency-svc-jrz7j
May 25 11:27:38.925: INFO: Got endpoints: latency-svc-jrz7j [1.008833312s]
May 25 11:27:38.954: INFO: Created: latency-svc-2kvvx
May 25 11:27:38.970: INFO: Got endpoints: latency-svc-2kvvx [1.019731638s]
May 25 11:27:39.011: INFO: Created: latency-svc-6hnzc
May 25 11:27:39.099: INFO: Got endpoints: latency-svc-6hnzc [1.027410183s]
May 25 11:27:39.123: INFO: Created: latency-svc-hhvnj
May 25 11:27:39.139: INFO: Got endpoints: latency-svc-hhvnj [1.037763319s]
May 25 11:27:39.185: INFO: Created: latency-svc-887qv
May 25 11:27:39.193: INFO: Got endpoints: latency-svc-887qv [987.49038ms]
May 25 11:27:39.260: INFO: Created: latency-svc-kkdjg
May 25 11:27:39.264: INFO: Got endpoints: latency-svc-kkdjg [1.02376295s]
May 25 11:27:39.305: INFO: Created: latency-svc-89wdn
May 25 11:27:39.332: INFO: Got endpoints: latency-svc-89wdn [1.028970713s]
May 25 11:27:39.405: INFO: Created: latency-svc-qk97k
May 25 11:27:39.423: INFO: Got endpoints: latency-svc-qk97k [1.033400923s]
May 25 11:27:39.447: INFO: Created: latency-svc-nrdll
May 25 11:27:39.477: INFO: Got endpoints: latency-svc-nrdll [1.059429471s]
May 25 11:27:39.554: INFO: Created: latency-svc-7btfr
May 25 11:27:39.562: INFO: Got endpoints: latency-svc-7btfr [1.045008617s]
May 25 11:27:39.605: INFO: Created: latency-svc-sgl62
May 25 11:27:39.640: INFO: Got endpoints: latency-svc-sgl62 [1.05664369s]
May 25 11:27:39.727: INFO: Created: latency-svc-p4lnt
May 25 11:27:39.755: INFO: Created: latency-svc-rj9zd
May 25 11:27:39.755: INFO: Got endpoints: latency-svc-p4lnt [1.099068597s]
May 25 11:27:39.772: INFO: Got endpoints: latency-svc-rj9zd [1.049389845s]
May 25 11:27:39.798: INFO: Created: latency-svc-4774m
May 25 11:27:39.865: INFO: Got endpoints: latency-svc-4774m [1.077990504s]
May 25 11:27:39.893: INFO: Created: latency-svc-4dnc4
May 25 11:27:39.919: INFO: Got endpoints: latency-svc-4dnc4 [1.084526912s]
May 25 11:27:40.008: INFO: Created: latency-svc-qvkj5
May 25 11:27:40.033: INFO: Got endpoints: latency-svc-qvkj5 [1.107820922s]
May 25 11:27:40.061: INFO: Created: latency-svc-c59b5
May 25 11:27:40.075: INFO: Got endpoints: latency-svc-c59b5 [1.104945134s]
May 25 11:27:40.096: INFO: Created: latency-svc-6tb6h
May 25 11:27:40.147: INFO: Got endpoints: latency-svc-6tb6h [1.048085706s]
May 25 11:27:40.160: INFO: Created: latency-svc-h7w7l
May 25 11:27:40.176: INFO: Got endpoints: latency-svc-h7w7l [1.03676252s]
May 25 11:27:40.246: INFO: Created: latency-svc-4v9rk
May 25 11:27:40.296: INFO: Got endpoints: latency-svc-4v9rk [1.10221998s]
May 25 11:27:40.306: INFO: Created: latency-svc-sh8mb
May 25 11:27:40.327: INFO: Got endpoints: latency-svc-sh8mb [1.062822739s]
May 25 11:27:40.391: INFO: Created: latency-svc-gcgbh
May 25 11:27:40.439: INFO: Got endpoints: latency-svc-gcgbh [1.107058715s]
May 25 11:27:40.455: INFO: Created: latency-svc-r5trh
May 25 11:27:40.470: INFO: Got endpoints: latency-svc-r5trh [1.047790645s]
May 25 11:27:40.503: INFO: Created: latency-svc-26vdt
May 25 11:27:40.534: INFO: Got endpoints: latency-svc-26vdt [1.057147362s]
May 25 11:27:40.624: INFO: Created: latency-svc-wtttg
May 25 11:27:40.639: INFO: Got endpoints: latency-svc-wtttg [1.07713637s]
May 25 11:27:40.660: INFO: Created: latency-svc-b828r
May 25 11:27:40.676: INFO: Got endpoints: latency-svc-b828r [1.035811791s]
May 25 11:27:40.782: INFO: Created: latency-svc-5gpml
May 25 11:27:40.785: INFO: Got endpoints: latency-svc-5gpml [1.030090547s]
May 25 11:27:40.827: INFO: Created: latency-svc-tjvng
May 25 11:27:40.842: INFO: Got endpoints: latency-svc-tjvng [1.069459436s]
May 25 11:27:40.868: INFO: Created: latency-svc-stqpm
May 25 11:27:40.880: INFO: Got endpoints: latency-svc-stqpm [1.015009613s]
May 25 11:27:40.936: INFO: Created: latency-svc-z5xrq
May 25 11:27:40.954: INFO: Got endpoints: latency-svc-z5xrq [1.034818553s]
May 25 11:27:40.984: INFO: Created: latency-svc-p9wxx
May 25 11:27:41.001: INFO: Got endpoints: latency-svc-p9wxx [968.362988ms]
May 25 11:27:41.096: INFO: Created: latency-svc-msm7b
May 25 11:27:41.122: INFO: Got endpoints: latency-svc-msm7b [1.046469521s]
May 25 11:27:41.150: INFO: Created: latency-svc-lc4qm
May 25 11:27:41.164: INFO: Got endpoints: latency-svc-lc4qm [1.017124339s]
May 25 11:27:41.224: INFO: Created: latency-svc-7xwr7
May 25 11:27:41.296: INFO: Got endpoints: latency-svc-7xwr7 [1.120504794s]
May 25 11:27:41.384: INFO: Created: latency-svc-8xkdq
May 25 11:27:41.392: INFO: Got endpoints: latency-svc-8xkdq [1.096596546s]
May 25 11:27:41.497: INFO: Created: latency-svc-rhw69
May 25 11:27:41.525: INFO: Got endpoints: latency-svc-rhw69 [1.19854112s]
May 25 11:27:41.638: INFO: Created: latency-svc-bhkjs
May 25 11:27:41.641: INFO: Got endpoints: latency-svc-bhkjs [1.201620514s]
May 25 11:27:41.666: INFO: Created: latency-svc-gl44z
May 25 11:27:41.676: INFO: Got endpoints: latency-svc-gl44z [1.205273264s]
May 25 11:27:41.830: INFO: Created: latency-svc-49dfs
May 25 11:27:41.833: INFO: Got endpoints: latency-svc-49dfs [1.298517633s]
May 25 11:27:41.864: INFO: Created: latency-svc-59zss
May 25 11:27:41.906: INFO: Got endpoints: latency-svc-59zss [1.266422489s]
May 25 11:27:42.002: INFO: Created: latency-svc-q6g49
May 25 11:27:42.025: INFO: Got endpoints: latency-svc-q6g49 [1.349152969s]
May 25 11:27:42.118: INFO: Created: latency-svc-q2g8c
May 25 11:27:42.146: INFO: Got endpoints: latency-svc-q2g8c [1.360779663s]
May 25 11:27:42.188: INFO: Created: latency-svc-4rkkw
May 25 11:27:42.206: INFO: Got endpoints: latency-svc-4rkkw [1.363988991s]
May 25 11:27:42.268: INFO: Created: latency-svc-fkxvd
May 25 11:27:42.284: INFO: Got endpoints: latency-svc-fkxvd [1.403629963s]
May 25 11:27:42.416: INFO: Created: latency-svc-dch7n
May 25 11:27:42.428: INFO: Got endpoints: latency-svc-dch7n [1.47444072s]
May 25 11:27:42.458: INFO: Created: latency-svc-mnvls
May 25 11:27:42.477: INFO: Got endpoints: latency-svc-mnvls [1.475216706s]
May 25 11:27:42.571: INFO: Created: latency-svc-m2tlf
May 25 11:27:42.609: INFO: Got endpoints: latency-svc-m2tlf [1.487493809s]
May 25 11:27:42.650: INFO: Created: latency-svc-8nppl
May 25 11:27:42.669: INFO: Got endpoints: latency-svc-8nppl [1.505273762s]
May 25 11:27:42.759: INFO: Created: latency-svc-7vhmp
May 25 11:27:42.790: INFO: Got endpoints: latency-svc-7vhmp [1.493202574s]
May 25 11:27:42.868: INFO: Created: latency-svc-qz9vh
May 25 11:27:42.886: INFO: Got endpoints: latency-svc-qz9vh [1.493869263s]
May 25 11:27:43.044: INFO: Created: latency-svc-dwk5l
May 25 11:27:43.054: INFO: Got endpoints: latency-svc-dwk5l [1.528247594s]
May 25 11:27:43.135: INFO: Created: latency-svc-l5m5f
May 25 11:27:43.188: INFO: Got endpoints: latency-svc-l5m5f [1.547147532s]
May 25 11:27:43.219: INFO: Created: latency-svc-h96h4
May 25 11:27:43.234: INFO: Got endpoints: latency-svc-h96h4 [1.558563242s]
May 25 11:27:43.332: INFO: Created: latency-svc-r7trh
May 25 11:27:43.337: INFO: Got endpoints: latency-svc-r7trh [1.504280016s]
May 25 11:27:43.370: INFO: Created: latency-svc-46nf5
May 25 11:27:43.385: INFO: Got endpoints: latency-svc-46nf5 [1.479550764s]
May 25 11:27:43.529: INFO: Created: latency-svc-vmr4p
May 25 11:27:43.550: INFO: Got endpoints: latency-svc-vmr4p [1.52504209s]
May 25 11:27:43.622: INFO: Created: latency-svc-wwmth
May 25 11:27:43.698: INFO: Got endpoints: latency-svc-wwmth [1.551708471s]
May 25 11:27:43.730: INFO: Created: latency-svc-gkx9v
May 25 11:27:43.746: INFO: Got endpoints: latency-svc-gkx9v [1.54028855s]
May 25 11:27:43.796: INFO: Created: latency-svc-2p4vf
May 25 11:27:43.877: INFO: Got endpoints: latency-svc-2p4vf [1.592573884s]
May 25 11:27:43.911: INFO: Created: latency-svc-bxhzx
May 25 11:27:43.921: INFO: Got endpoints: latency-svc-bxhzx [1.493107019s]
May 25 11:27:43.969: INFO: Created: latency-svc-qj5qt
May 25 11:27:44.023: INFO: Got endpoints: latency-svc-qj5qt [1.546701002s]
May 25 11:27:44.066: INFO: Created: latency-svc-w27t4
May 25 11:27:44.083: INFO: Got endpoints: latency-svc-w27t4 [1.47378365s]
May 25 11:27:44.170: INFO: Created: latency-svc-fjtl8
May 25 11:27:44.174: INFO: Got endpoints: latency-svc-fjtl8 [1.50467074s]
May 25 11:27:44.203: INFO: Created: latency-svc-v7hlf
May 25 11:27:44.223: INFO: Got endpoints: latency-svc-v7hlf [1.432865307s]
May 25 11:27:44.320: INFO: Created: latency-svc-m6jn2
May 25 11:27:44.324: INFO: Got endpoints: latency-svc-m6jn2 [1.437995451s]
May 25 11:27:44.349: INFO: Created: latency-svc-thfg6
May 25 11:27:44.367: INFO: Got endpoints: latency-svc-thfg6 [1.313541464s]
May 25 11:27:44.458: INFO: Created: latency-svc-zlfb9
May 25 11:27:44.469: INFO: Got endpoints: latency-svc-zlfb9 [1.280941709s]
May 25 11:27:44.510: INFO: Created: latency-svc-75hlw
May 25 11:27:44.546: INFO: Got endpoints: latency-svc-75hlw [1.311605679s]
May 25 11:27:44.620: INFO: Created: latency-svc-r7887
May 25 11:27:44.649: INFO: Got endpoints: latency-svc-r7887 [1.311487627s]
May 25 11:27:44.649: INFO: Latencies: [125.575708ms 179.37425ms 294.180461ms 426.978316ms 466.465514ms 520.334844ms 592.755856ms 720.200137ms 817.531664ms 887.701581ms 966.060153ms 968.362988ms 987.49038ms 996.158827ms 1.001813594s 1.001921827s 1.008833312s 1.015009613s 1.017124339s 1.019211886s 1.019731638s 1.019854374s 1.02376295s 1.026359053s 1.026640753s 1.027410183s 1.028970713s 1.030090547s 1.032302209s 1.033014436s 1.033400923s 1.034818553s 1.035811791s 1.036537688s 1.03676252s 1.037763319s 1.045008617s 1.045646422s 1.046469521s 1.047664429s 1.047790645s 1.048085706s 1.049389845s 1.054180367s 1.05664369s 1.057147362s 1.059429471s 1.06201332s 1.062822739s 1.065482344s 1.067290419s 1.069459436s 1.075496649s 1.07713637s 1.077990504s 1.084526912s 1.096596546s 1.099068597s 1.10221998s 1.102287195s 1.104945134s 1.107058715s 1.107820922s 1.111332587s 1.118110642s 1.120133377s 1.120504794s 1.12146986s 1.13484807s 1.13649411s 1.144369266s 1.154418038s 1.161844668s 1.161846902s 1.162623765s 1.163711439s 1.164844956s 1.188161342s 1.196607244s 1.197424446s 1.19854112s 1.200506131s 1.201620514s 1.205273264s 1.208189954s 1.215933434s 1.216578407s 1.219147253s 1.222090071s 1.222610516s 1.22490816s 1.225250338s 1.225928383s 1.229705249s 1.238018731s 1.241643388s 1.246055222s 1.246766411s 1.250134443s 1.252512773s 1.252940621s 1.258365005s 1.263679719s 1.265219861s 1.266422489s 1.267684406s 1.275569296s 1.278929092s 1.280919519s 1.280941709s 1.281697544s 1.285462161s 1.28669695s 1.292938369s 1.294545501s 1.295687143s 1.297868894s 1.298517633s 1.301189082s 1.30936521s 1.311487627s 1.311605679s 1.313541464s 1.314027976s 1.315993209s 1.349152969s 1.360779663s 1.363988991s 1.382284228s 1.397240196s 1.403629963s 1.42016866s 1.422763153s 1.422869638s 1.432865307s 1.437995451s 1.47378365s 1.47444072s 1.475216706s 1.479550764s 1.487493809s 1.493107019s 1.493202574s 1.493869263s 1.504280016s 1.50467074s 1.505273762s 1.52213955s 1.52504209s 1.528247594s 1.54028855s 1.546701002s 1.547147532s 1.551708471s 1.558563242s 1.571301339s 1.58839042s 1.592573884s 1.684329062s 1.7577982s 1.799360754s 1.807753726s 1.870611721s 1.898616642s 1.932462483s 1.974035157s 2.06765021s 2.084320362s 2.112417597s 2.132183596s 2.141987837s 2.169587631s 2.171056551s 2.171075808s 2.211852102s 2.263832509s 2.367282751s 2.412481393s 2.455166436s 2.479386496s 2.49052951s 2.497623004s 2.517318893s 2.528393241s 2.532171923s 2.546209123s 2.546840654s 2.559809485s 2.560635192s 2.570442083s 2.58904653s 2.589321845s 2.621936995s 2.62432816s 2.624755966s 2.642309187s 2.656362954s 2.830029834s 2.84209193s 2.909430105s]
May 25 11:27:44.649: INFO: 50 %ile: 1.252940621s
May 25 11:27:44.649: INFO: 90 %ile: 2.49052951s
May 25 11:27:44.649: INFO: 99 %ile: 2.84209193s
May 25 11:27:44.649: INFO: Total sample count: 200
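The three percentile lines above are read off the sorted list of the 200 propagation latencies printed in the Latencies line. A minimal nearest-rank sketch of that computation (my own illustration, not the e2e framework's helper; the sample slice is truncated here for brevity, values in nanoseconds):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the sample at the given fraction of the sorted slice,
// using a simple nearest-rank rule.
func percentile(sorted []time.Duration, frac float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(frac*float64(len(sorted))) - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	samples := []time.Duration{ // in the real run, all 200 values from the Latencies line
		125575708, 179374250, 294180461,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	fmt.Println("50 %ile:", percentile(samples, 0.50))
	fmt.Println("90 %ile:", percentile(samples, 0.90))
	fmt.Println("99 %ile:", percentile(samples, 0.99))
}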
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:27:44.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8309" for this suite.

• [SLOW TEST:24.367 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":108,"skipped":1861,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:27:44.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 25 11:27:44.751: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 25 11:27:44.799: INFO: Waiting for terminating namespaces to be deleted...
May 25 11:27:44.827: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May 25 11:27:44.841: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:27:44.841: INFO: 	Container kindnet-cni ready: true, restart count 1
May 25 11:27:44.841: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:27:44.841: INFO: 	Container kube-proxy ready: true, restart count 0
May 25 11:27:44.841: INFO: svc-latency-rc-mxqrq from svc-latency-8309 started at 2020-05-25 11:27:21 +0000 UTC (1 container statuses recorded)
May 25 11:27:44.841: INFO: 	Container svc-latency-rc ready: true, restart count 0
May 25 11:27:44.841: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May 25 11:27:44.861: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:27:44.861: INFO: 	Container kindnet-cni ready: true, restart count 0
May 25 11:27:44.861: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:27:44.861: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-2a3bdebc-0bad-440f-949a-70a4dfd8d537 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-2a3bdebc-0bad-440f-949a-70a4dfd8d537 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-2a3bdebc-0bad-440f-949a-70a4dfd8d537
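The conflict exercised above comes from two pods requesting the same hostPort and protocol on one node, one via the wildcard hostIP 0.0.0.0 (pod4) and one via 127.0.0.1 (pod5). A hedged sketch of what such a second pod spec looks like with the client-go types (the image and container details are illustrative, not the test's literal object; only the port, hostIP, and node label come from the steps above):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// conflictingPod sketches a pod that should stay Pending on the labelled node,
// because pod4 already holds hostPort 54322 there via hostIP 0.0.0.0.
func conflictingPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod5"},
		Spec: corev1.PodSpec{
			// Pin the pod to the node labelled earlier in the test.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-2a3bdebc-0bad-440f-949a-70a4dfd8d537": "95",
			},
			Containers: []corev1.Container{{
				Name:  "pod5",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
				Ports: []corev1.ContainerPort{{
					ContainerPort: 80,
					HostPort:      54322,
					HostIP:        "127.0.0.1", // pod4 used 0.0.0.0; same port+protocol conflicts
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}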
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:32:55.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-496" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:310.444 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":109,"skipped":1928,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:32:55.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5286
STEP: Creating an active service to test reachability when its FQDN is referred to as the externalName of another service
STEP: creating service externalsvc in namespace services-5286
STEP: creating replication controller externalsvc in namespace services-5286
I0525 11:32:55.376456       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5286, replica count: 2
I0525 11:32:58.426957       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:33:01.427123       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
May 25 11:33:01.528: INFO: Creating new exec pod
May 25 11:33:05.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5286 execpodppj4j -- /bin/sh -x -c nslookup nodeport-service'
May 25 11:33:05.871: INFO: stderr: "I0525 11:33:05.766702    1714 log.go:172] (0xc0000e0370) (0xc0006a9220) Create stream\nI0525 11:33:05.766763    1714 log.go:172] (0xc0000e0370) (0xc0006a9220) Stream added, broadcasting: 1\nI0525 11:33:05.769983    1714 log.go:172] (0xc0000e0370) Reply frame received for 1\nI0525 11:33:05.770063    1714 log.go:172] (0xc0000e0370) (0xc0006a92c0) Create stream\nI0525 11:33:05.770096    1714 log.go:172] (0xc0000e0370) (0xc0006a92c0) Stream added, broadcasting: 3\nI0525 11:33:05.771295    1714 log.go:172] (0xc0000e0370) Reply frame received for 3\nI0525 11:33:05.771348    1714 log.go:172] (0xc0000e0370) (0xc000b5e000) Create stream\nI0525 11:33:05.771364    1714 log.go:172] (0xc0000e0370) (0xc000b5e000) Stream added, broadcasting: 5\nI0525 11:33:05.772470    1714 log.go:172] (0xc0000e0370) Reply frame received for 5\nI0525 11:33:05.842708    1714 log.go:172] (0xc0000e0370) Data frame received for 5\nI0525 11:33:05.842746    1714 log.go:172] (0xc000b5e000) (5) Data frame handling\nI0525 11:33:05.842774    1714 log.go:172] (0xc000b5e000) (5) Data frame sent\n+ nslookup nodeport-service\nI0525 11:33:05.863678    1714 log.go:172] (0xc0000e0370) Data frame received for 3\nI0525 11:33:05.863706    1714 log.go:172] (0xc0006a92c0) (3) Data frame handling\nI0525 11:33:05.863766    1714 log.go:172] (0xc0006a92c0) (3) Data frame sent\nI0525 11:33:05.864373    1714 log.go:172] (0xc0000e0370) Data frame received for 3\nI0525 11:33:05.864406    1714 log.go:172] (0xc0006a92c0) (3) Data frame handling\nI0525 11:33:05.864454    1714 log.go:172] (0xc0006a92c0) (3) Data frame sent\nI0525 11:33:05.864942    1714 log.go:172] (0xc0000e0370) Data frame received for 5\nI0525 11:33:05.864986    1714 log.go:172] (0xc000b5e000) (5) Data frame handling\nI0525 11:33:05.865538    1714 log.go:172] (0xc0000e0370) Data frame received for 3\nI0525 11:33:05.865559    1714 log.go:172] (0xc0006a92c0) (3) Data frame handling\nI0525 11:33:05.867068    1714 log.go:172] (0xc0000e0370) Data frame received for 1\nI0525 11:33:05.867092    1714 log.go:172] (0xc0006a9220) (1) Data frame handling\nI0525 11:33:05.867106    1714 log.go:172] (0xc0006a9220) (1) Data frame sent\nI0525 11:33:05.867137    1714 log.go:172] (0xc0000e0370) (0xc0006a9220) Stream removed, broadcasting: 1\nI0525 11:33:05.867191    1714 log.go:172] (0xc0000e0370) Go away received\nI0525 11:33:05.867607    1714 log.go:172] (0xc0000e0370) (0xc0006a9220) Stream removed, broadcasting: 1\nI0525 11:33:05.867631    1714 log.go:172] (0xc0000e0370) (0xc0006a92c0) Stream removed, broadcasting: 3\nI0525 11:33:05.867644    1714 log.go:172] (0xc0000e0370) (0xc000b5e000) Stream removed, broadcasting: 5\n"
May 25 11:33:05.871: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5286.svc.cluster.local\tcanonical name = externalsvc.services-5286.svc.cluster.local.\nName:\texternalsvc.services-5286.svc.cluster.local\nAddress: 10.109.120.245\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5286, will wait for the garbage collector to delete the pods
May 25 11:33:05.932: INFO: Deleting ReplicationController externalsvc took: 6.483763ms
May 25 11:33:06.332: INFO: Terminating ReplicationController externalsvc pods took: 400.228522ms
May 25 11:33:13.838: INFO: Cleaning up the NodePort to ExternalName test service
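The type flip above is driven by the test's service helper; the equivalent client-go update is roughly the following sketch (namespace and names taken from the steps above, the clientset wiring assumed, so this is not the test's literal code). The nslookup output confirms the effect: nodeport-service now resolves as a CNAME to externalsvc.services-5286.svc.cluster.local.

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// toExternalName converts the NodePort service into an ExternalName alias for externalsvc.
func toExternalName(ctx context.Context, cs kubernetes.Interface) error {
	svcs := cs.CoreV1().Services("services-5286")
	svc, err := svcs.Get(ctx, "nodeport-service", metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-5286.svc.cluster.local"
	svc.Spec.ClusterIP = "" // ExternalName services carry no cluster IP
	for i := range svc.Spec.Ports {
		svc.Spec.Ports[i].NodePort = 0 // ...and no allocated node ports
	}
	_, err = svcs.Update(ctx, svc, metav1.UpdateOptions{})
	return err
}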
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:33:13.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5286" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:18.770 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":110,"skipped":1931,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:33:13.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-56f09caf-4d0e-4caa-a21e-fb4c6193c55e
STEP: Creating a pod to test consume configMaps
May 25 11:33:14.061: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee7696a8-3359-4e9e-9cad-8f2448b549a4" in namespace "configmap-6970" to be "Succeeded or Failed"
May 25 11:33:14.078: INFO: Pod "pod-configmaps-ee7696a8-3359-4e9e-9cad-8f2448b549a4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.625108ms
May 25 11:33:16.083: INFO: Pod "pod-configmaps-ee7696a8-3359-4e9e-9cad-8f2448b549a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021702402s
May 25 11:33:18.087: INFO: Pod "pod-configmaps-ee7696a8-3359-4e9e-9cad-8f2448b549a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026205939s
STEP: Saw pod success
May 25 11:33:18.087: INFO: Pod "pod-configmaps-ee7696a8-3359-4e9e-9cad-8f2448b549a4" satisfied condition "Succeeded or Failed"
May 25 11:33:18.090: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-ee7696a8-3359-4e9e-9cad-8f2448b549a4 container configmap-volume-test: 
STEP: delete the pod
May 25 11:33:18.137: INFO: Waiting for pod pod-configmaps-ee7696a8-3359-4e9e-9cad-8f2448b549a4 to disappear
May 25 11:33:18.150: INFO: Pod pod-configmaps-ee7696a8-3359-4e9e-9cad-8f2448b549a4 no longer exists
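The volume this test consumes maps a ConfigMap key to an explicit path with a per-item file mode, which is what "mappings and Item mode set" in the test name refers to. A hedged sketch of such a volume source with the client-go types (key, path, and mode are illustrative, not the test's literal values; the ConfigMap name is the one created above):

package sketch

import corev1 "k8s.io/api/core/v1"

// mappedConfigMapVolume sketches a configMap volume whose Items entry remaps a key
// to a new path and pins the file mode the test container then checks.
func mappedConfigMapVolume() corev1.Volume {
	mode := int32(0400) // per-item file mode
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-volume-map-56f09caf-4d0e-4caa-a21e-fb4c6193c55e",
				},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",         // illustrative key
					Path: "path/to/data-1", // remapped path inside the mount
					Mode: &mode,
				}},
			},
		},
	}
}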
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:33:18.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6970" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1933,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:33:18.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-r2fpr in namespace proxy-2937
I0525 11:33:18.287799       7 runners.go:190] Created replication controller with name: proxy-service-r2fpr, namespace: proxy-2937, replica count: 1
I0525 11:33:19.338225       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:33:20.339363       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:33:21.339574       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:33:22.339830       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:33:23.340073       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0525 11:33:24.340385       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0525 11:33:25.340683       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0525 11:33:26.340919       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0525 11:33:27.341316       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0525 11:33:28.341556       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0525 11:33:29.341830       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0525 11:33:30.342054       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0525 11:33:31.342374       7 runners.go:190] proxy-service-r2fpr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 25 11:33:31.346: INFO: setup took 13.143344288s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
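Each of the 320 attempts below hits the apiserver proxy subresource, addressing the pod or service as an optional scheme, the object name, and an optional port or port name. A small sketch of how those paths are assembled (the helper is mine, purely illustrative; compare its outputs with the URLs logged below):

package sketch

import "fmt"

// proxyPath builds an apiserver proxy path for a pod or service in proxy-2937.
// kind is "pods" or "services"; scheme and port may be empty.
func proxyPath(kind, scheme, name, port, path string) string {
	target := name
	if scheme != "" {
		target = scheme + ":" + target
	}
	if port != "" {
		target += ":" + port
	}
	return fmt.Sprintf("/api/v1/namespaces/proxy-2937/%s/%s/proxy/%s", kind, target, path)
}

// proxyPath("pods", "", "proxy-service-r2fpr-nhsgj", "162", "")
//   -> /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/
// proxyPath("services", "https", "proxy-service-r2fpr", "tlsportname1", "")
//   -> /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/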
May 25 11:33:31.353: INFO: (0) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 7.251666ms)
May 25 11:33:31.354: INFO: (0) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 7.431227ms)
May 25 11:33:31.357: INFO: (0) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 10.813176ms)
May 25 11:33:31.358: INFO: (0) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 11.215386ms)
May 25 11:33:31.358: INFO: (0) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 11.409175ms)
May 25 11:33:31.358: INFO: (0) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 11.661241ms)
May 25 11:33:31.360: INFO: (0) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 13.225186ms)
May 25 11:33:31.360: INFO: (0) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 13.787066ms)
May 25 11:33:31.361: INFO: (0) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 14.698222ms)
May 25 11:33:31.361: INFO: (0) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 14.657081ms)
May 25 11:33:31.361: INFO: (0) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 14.83316ms)
May 25 11:33:31.362: INFO: (0) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: test (200; 6.421032ms)
May 25 11:33:31.374: INFO: (1) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 6.510475ms)
May 25 11:33:31.374: INFO: (1) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 6.901311ms)
May 25 11:33:31.374: INFO: (1) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 7.00459ms)
May 25 11:33:31.374: INFO: (1) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 6.909656ms)
May 25 11:33:31.374: INFO: (1) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 7.094357ms)
May 25 11:33:31.375: INFO: (1) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 7.357287ms)
May 25 11:33:31.375: INFO: (1) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 7.299688ms)
May 25 11:33:31.375: INFO: (1) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 7.341894ms)
May 25 11:33:31.375: INFO: (1) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 7.414124ms)
May 25 11:33:31.375: INFO: (1) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 7.439473ms)
May 25 11:33:31.375: INFO: (1) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname2/proxy/: tls qux (200; 7.385298ms)
May 25 11:33:31.375: INFO: (1) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 7.34869ms)
May 25 11:33:31.375: INFO: (1) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 7.454496ms)
May 25 11:33:31.378: INFO: (2) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 3.326065ms)
May 25 11:33:31.380: INFO: (2) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 4.855618ms)
May 25 11:33:31.380: INFO: (2) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 4.92096ms)
May 25 11:33:31.380: INFO: (2) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 5.157602ms)
May 25 11:33:31.380: INFO: (2) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 5.288692ms)
May 25 11:33:31.381: INFO: (2) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 5.856803ms)
May 25 11:33:31.381: INFO: (2) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 5.742645ms)
May 25 11:33:31.381: INFO: (2) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 6.528653ms)
May 25 11:33:31.382: INFO: (2) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 6.553712ms)
May 25 11:33:31.382: INFO: (2) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 6.325137ms)
May 25 11:33:31.382: INFO: (2) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 6.437253ms)
May 25 11:33:31.382: INFO: (2) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 6.568484ms)
May 25 11:33:31.382: INFO: (2) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 6.474656ms)
May 25 11:33:31.382: INFO: (2) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: test<... (200; 3.817647ms)
May 25 11:33:31.387: INFO: (3) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 4.834285ms)
May 25 11:33:31.388: INFO: (3) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 4.829614ms)
May 25 11:33:31.388: INFO: (3) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 4.930245ms)
May 25 11:33:31.388: INFO: (3) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 4.903445ms)
May 25 11:33:31.388: INFO: (3) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 4.964554ms)
May 25 11:33:31.388: INFO: (3) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 5.06561ms)
May 25 11:33:31.388: INFO: (3) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: test (200; 2.39804ms)
May 25 11:33:31.394: INFO: (4) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 5.321384ms)
May 25 11:33:31.394: INFO: (4) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 5.609332ms)
May 25 11:33:31.394: INFO: (4) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 5.645356ms)
May 25 11:33:31.394: INFO: (4) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 5.712367ms)
May 25 11:33:31.394: INFO: (4) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 5.690518ms)
May 25 11:33:31.394: INFO: (4) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 5.891966ms)
May 25 11:33:31.394: INFO: (4) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname2/proxy/: tls qux (200; 5.973557ms)
May 25 11:33:31.395: INFO: (4) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 6.088375ms)
May 25 11:33:31.395: INFO: (4) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 6.242354ms)
May 25 11:33:31.395: INFO: (4) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 6.307613ms)
May 25 11:33:31.395: INFO: (4) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 6.278921ms)
May 25 11:33:31.395: INFO: (4) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: test<... (200; 6.380469ms)
May 25 11:33:31.396: INFO: (4) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 7.582012ms)
May 25 11:33:31.399: INFO: (5) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 3.049139ms)
May 25 11:33:31.399: INFO: (5) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: ... (200; 5.110907ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 6.496758ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 6.517794ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 6.462177ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 6.758121ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 6.952081ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 6.967707ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 7.01669ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname2/proxy/: tls qux (200; 7.103238ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 7.146966ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 7.158112ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 7.189918ms)
May 25 11:33:31.403: INFO: (5) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 7.31837ms)
May 25 11:33:31.407: INFO: (6) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 3.440653ms)
May 25 11:33:31.408: INFO: (6) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 4.495179ms)
May 25 11:33:31.408: INFO: (6) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 4.593362ms)
May 25 11:33:31.409: INFO: (6) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 5.504531ms)
May 25 11:33:31.409: INFO: (6) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 5.704805ms)
May 25 11:33:31.409: INFO: (6) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 5.622955ms)
May 25 11:33:31.409: INFO: (6) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 5.652366ms)
May 25 11:33:31.409: INFO: (6) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 5.616512ms)
May 25 11:33:31.409: INFO: (6) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: test<... (200; 4.518733ms)
May 25 11:33:31.416: INFO: (7) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 4.905193ms)
May 25 11:33:31.416: INFO: (7) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 4.75926ms)
May 25 11:33:31.416: INFO: (7) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 4.953157ms)
May 25 11:33:31.416: INFO: (7) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 4.910452ms)
May 25 11:33:31.416: INFO: (7) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 4.96637ms)
May 25 11:33:31.416: INFO: (7) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 4.909702ms)
May 25 11:33:31.416: INFO: (7) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 5.093273ms)
May 25 11:33:31.417: INFO: (7) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: ... (200; 2.761895ms)
May 25 11:33:31.423: INFO: (8) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 2.930348ms)
May 25 11:33:31.424: INFO: (8) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 3.941493ms)
May 25 11:33:31.424: INFO: (8) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 4.340534ms)
May 25 11:33:31.424: INFO: (8) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: test (200; 6.272449ms)
May 25 11:33:31.432: INFO: (9) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname2/proxy/: tls qux (200; 5.85403ms)
May 25 11:33:31.432: INFO: (9) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 5.934118ms)
May 25 11:33:31.432: INFO: (9) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 5.975807ms)
May 25 11:33:31.432: INFO: (9) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 6.143085ms)
May 25 11:33:31.432: INFO: (9) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 6.093803ms)
May 25 11:33:31.433: INFO: (9) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 6.92051ms)
May 25 11:33:31.433: INFO: (9) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 7.04613ms)
May 25 11:33:31.434: INFO: (9) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 7.243959ms)
May 25 11:33:31.434: INFO: (9) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 7.305559ms)
May 25 11:33:31.434: INFO: (9) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: test<... (200; 7.688496ms)
May 25 11:33:31.434: INFO: (9) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 7.746183ms)
May 25 11:33:31.434: INFO: (9) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 7.75478ms)
May 25 11:33:31.437: INFO: (10) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 3.346448ms)
May 25 11:33:31.438: INFO: (10) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 3.854421ms)
May 25 11:33:31.438: INFO: (10) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 4.107814ms)
May 25 11:33:31.439: INFO: (10) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 4.467847ms)
May 25 11:33:31.439: INFO: (10) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 5.052502ms)
May 25 11:33:31.439: INFO: (10) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 5.103348ms)
May 25 11:33:31.439: INFO: (10) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 5.168495ms)
May 25 11:33:31.440: INFO: (10) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 5.45025ms)
May 25 11:33:31.440: INFO: (10) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 5.425033ms)
May 25 11:33:31.440: INFO: (10) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 5.774659ms)
May 25 11:33:31.440: INFO: (10) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 5.706303ms)
May 25 11:33:31.440: INFO: (10) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 5.6908ms)
May 25 11:33:31.440: INFO: (10) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 5.794946ms)
May 25 11:33:31.440: INFO: (10) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname2/proxy/: tls qux (200; 5.805086ms)
May 25 11:33:31.440: INFO: (10) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: test<... (200; 21.381247ms)
May 25 11:33:31.461: INFO: (11) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: ... (200; 21.385613ms)
May 25 11:33:31.462: INFO: (11) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 21.395109ms)
May 25 11:33:31.462: INFO: (11) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 21.541351ms)
May 25 11:33:31.462: INFO: (11) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 22.053415ms)
May 25 11:33:31.462: INFO: (11) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 22.113538ms)
May 25 11:33:31.462: INFO: (11) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 22.372534ms)
May 25 11:33:31.462: INFO: (11) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 22.357849ms)
May 25 11:33:31.463: INFO: (11) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 22.509181ms)
May 25 11:33:31.463: INFO: (11) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname2/proxy/: tls qux (200; 23.004987ms)
May 25 11:33:31.463: INFO: (11) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 23.003102ms)
May 25 11:33:31.463: INFO: (11) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 22.978359ms)
May 25 11:33:31.463: INFO: (11) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 23.018949ms)
May 25 11:33:31.463: INFO: (11) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 23.282736ms)
May 25 11:33:31.468: INFO: (12) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 4.898396ms)
May 25 11:33:31.468: INFO: (12) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 4.810191ms)
May 25 11:33:31.468: INFO: (12) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 4.91464ms)
May 25 11:33:31.468: INFO: (12) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: test<... (200; 7.270777ms)
May 25 11:33:31.471: INFO: (12) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 7.302996ms)
May 25 11:33:31.471: INFO: (12) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 7.390892ms)
May 25 11:33:31.471: INFO: (12) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 7.426678ms)
May 25 11:33:31.471: INFO: (12) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 7.477139ms)
May 25 11:33:31.472: INFO: (12) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 8.06353ms)
May 25 11:33:31.472: INFO: (12) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 8.42194ms)
May 25 11:33:31.472: INFO: (12) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 8.485129ms)
May 25 11:33:31.472: INFO: (12) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname2/proxy/: tls qux (200; 8.581129ms)
May 25 11:33:31.472: INFO: (12) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 8.543645ms)
May 25 11:33:31.472: INFO: (12) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 9.041069ms)
May 25 11:33:31.476: INFO: (13) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 3.436243ms)
May 25 11:33:31.476: INFO: (13) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 3.642067ms)
May 25 11:33:31.476: INFO: (13) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 3.700992ms)
May 25 11:33:31.476: INFO: (13) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 3.719864ms)
May 25 11:33:31.476: INFO: (13) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 3.702278ms)
May 25 11:33:31.477: INFO: (13) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 4.787208ms)
May 25 11:33:31.477: INFO: (13) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 4.722543ms)
May 25 11:33:31.477: INFO: (13) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: ... (200; 3.922314ms)
May 25 11:33:31.483: INFO: (14) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 4.233083ms)
May 25 11:33:31.483: INFO: (14) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 4.2389ms)
May 25 11:33:31.483: INFO: (14) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 4.304793ms)
May 25 11:33:31.484: INFO: (14) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 4.738113ms)
May 25 11:33:31.484: INFO: (14) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 5.388769ms)
May 25 11:33:31.484: INFO: (14) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 5.348395ms)
May 25 11:33:31.485: INFO: (14) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 5.647195ms)
May 25 11:33:31.485: INFO: (14) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 5.898809ms)
May 25 11:33:31.485: INFO: (14) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname2/proxy/: tls qux (200; 5.865313ms)
May 25 11:33:31.485: INFO: (14) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 5.906649ms)
May 25 11:33:31.485: INFO: (14) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 5.931816ms)
May 25 11:33:31.485: INFO: (14) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: ... (200; 4.688336ms)
May 25 11:33:31.490: INFO: (15) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 4.735355ms)
May 25 11:33:31.490: INFO: (15) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 4.777113ms)
May 25 11:33:31.491: INFO: (15) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 5.846ms)
May 25 11:33:31.491: INFO: (15) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 6.068867ms)
May 25 11:33:31.492: INFO: (15) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 6.159391ms)
May 25 11:33:31.492: INFO: (15) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 6.171757ms)
May 25 11:33:31.492: INFO: (15) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 6.104797ms)
May 25 11:33:31.492: INFO: (15) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 6.270669ms)
May 25 11:33:31.492: INFO: (15) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname2/proxy/: tls qux (200; 6.25549ms)
May 25 11:33:31.492: INFO: (15) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 6.245731ms)
May 25 11:33:31.492: INFO: (15) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 6.610749ms)
May 25 11:33:31.492: INFO: (15) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 6.564022ms)
May 25 11:33:31.492: INFO: (15) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: test<... (200; 6.589907ms)
May 25 11:33:31.497: INFO: (16) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 4.721224ms)
May 25 11:33:31.497: INFO: (16) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 4.935325ms)
May 25 11:33:31.497: INFO: (16) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 5.078021ms)
May 25 11:33:31.497: INFO: (16) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 5.242502ms)
May 25 11:33:31.498: INFO: (16) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 5.382821ms)
May 25 11:33:31.498: INFO: (16) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 5.84181ms)
May 25 11:33:31.498: INFO: (16) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 5.801863ms)
May 25 11:33:31.498: INFO: (16) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 5.840348ms)
May 25 11:33:31.498: INFO: (16) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 5.790356ms)
May 25 11:33:31.498: INFO: (16) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: test<... (200; 4.896441ms)
May 25 11:33:31.505: INFO: (17) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 4.918989ms)
May 25 11:33:31.505: INFO: (17) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 4.959461ms)
May 25 11:33:31.506: INFO: (17) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 4.950977ms)
May 25 11:33:31.506: INFO: (17) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 4.951692ms)
May 25 11:33:31.506: INFO: (17) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:462/proxy/: tls qux (200; 5.037968ms)
May 25 11:33:31.506: INFO: (17) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 4.822693ms)
May 25 11:33:31.506: INFO: (17) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 5.026114ms)
May 25 11:33:31.506: INFO: (17) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 5.190378ms)
May 25 11:33:31.506: INFO: (17) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 5.839621ms)
May 25 11:33:31.506: INFO: (17) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 5.809278ms)
May 25 11:33:31.506: INFO: (17) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 5.625379ms)
May 25 11:33:31.506: INFO: (17) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname2/proxy/: tls qux (200; 5.80643ms)
May 25 11:33:31.506: INFO: (17) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 5.912059ms)
May 25 11:33:31.511: INFO: (18) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname2/proxy/: bar (200; 4.537337ms)
May 25 11:33:31.511: INFO: (18) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname2/proxy/: bar (200; 4.868685ms)
May 25 11:33:31.512: INFO: (18) /api/v1/namespaces/proxy-2937/services/http:proxy-service-r2fpr:portname1/proxy/: foo (200; 5.091922ms)
May 25 11:33:31.512: INFO: (18) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname2/proxy/: tls qux (200; 4.971337ms)
May 25 11:33:31.512: INFO: (18) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:460/proxy/: tls baz (200; 5.091535ms)
May 25 11:33:31.512: INFO: (18) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 5.093776ms)
May 25 11:33:31.512: INFO: (18) /api/v1/namespaces/proxy-2937/services/proxy-service-r2fpr:portname1/proxy/: foo (200; 5.061426ms)
May 25 11:33:31.512: INFO: (18) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 5.116128ms)
May 25 11:33:31.512: INFO: (18) /api/v1/namespaces/proxy-2937/services/https:proxy-service-r2fpr:tlsportname1/proxy/: tls baz (200; 5.096495ms)
May 25 11:33:31.512: INFO: (18) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 5.445079ms)
May 25 11:33:31.512: INFO: (18) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 5.48038ms)
May 25 11:33:31.512: INFO: (18) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 5.758569ms)
May 25 11:33:31.512: INFO: (18) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: ... (200; 5.774119ms)
May 25 11:33:31.515: INFO: (19) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:162/proxy/: bar (200; 2.546408ms)
May 25 11:33:31.516: INFO: (19) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:1080/proxy/: ... (200; 3.269727ms)
May 25 11:33:31.516: INFO: (19) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj/proxy/: test (200; 3.374753ms)
May 25 11:33:31.516: INFO: (19) /api/v1/namespaces/proxy-2937/pods/http:proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 3.403987ms)
May 25 11:33:31.516: INFO: (19) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:160/proxy/: foo (200; 3.424926ms)
May 25 11:33:31.516: INFO: (19) /api/v1/namespaces/proxy-2937/pods/proxy-service-r2fpr-nhsgj:1080/proxy/: test<... (200; 3.428486ms)
May 25 11:33:31.516: INFO: (19) /api/v1/namespaces/proxy-2937/pods/https:proxy-service-r2fpr-nhsgj:443/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-957e2598-f8cf-4ca7-b16b-2c1c045cf963
STEP: Creating secret with name s-test-opt-upd-7286f05c-a818-4453-9cbd-df8639b5dc41
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-957e2598-f8cf-4ca7-b16b-2c1c045cf963
STEP: Updating secret s-test-opt-upd-7286f05c-a818-4453-9cbd-df8639b5dc41
STEP: Creating secret with name s-test-opt-create-f4a8c548-843c-410c-bed1-32b1c50874d6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:35:05.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7652" for this suite.

• [SLOW TEST:81.360 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1975,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:35:05.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:35:09.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7532" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1984,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:35:09.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-11f1e71d-abf7-43a0-824d-a45f20d502aa
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:35:09.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8107" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":115,"skipped":1985,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:35:09.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
May 25 11:35:16.075: INFO: Successfully updated pod "adopt-release-24n75"
STEP: Checking that the Job readopts the Pod
May 25 11:35:16.075: INFO: Waiting up to 15m0s for pod "adopt-release-24n75" in namespace "job-8674" to be "adopted"
May 25 11:35:16.103: INFO: Pod "adopt-release-24n75": Phase="Running", Reason="", readiness=true. Elapsed: 28.371362ms
May 25 11:35:18.144: INFO: Pod "adopt-release-24n75": Phase="Running", Reason="", readiness=true. Elapsed: 2.069010419s
May 25 11:35:18.144: INFO: Pod "adopt-release-24n75" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
May 25 11:35:18.667: INFO: Successfully updated pod "adopt-release-24n75"
STEP: Checking that the Job releases the Pod
May 25 11:35:18.667: INFO: Waiting up to 15m0s for pod "adopt-release-24n75" in namespace "job-8674" to be "released"
May 25 11:35:18.691: INFO: Pod "adopt-release-24n75": Phase="Running", Reason="", readiness=true. Elapsed: 23.996027ms
May 25 11:35:20.711: INFO: Pod "adopt-release-24n75": Phase="Running", Reason="", readiness=true. Elapsed: 2.043463735s
May 25 11:35:20.711: INFO: Pod "adopt-release-24n75" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:35:20.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8674" for this suite.

• [SLOW TEST:11.593 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":116,"skipped":1998,"failed":0}
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:35:21.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 25 11:35:21.246: INFO: Waiting up to 5m0s for pod "downward-api-eeabbcdb-d04e-4cf3-a580-262d4d2801b5" in namespace "downward-api-1254" to be "Succeeded or Failed"
May 25 11:35:21.336: INFO: Pod "downward-api-eeabbcdb-d04e-4cf3-a580-262d4d2801b5": Phase="Pending", Reason="", readiness=false. Elapsed: 90.598116ms
May 25 11:35:23.340: INFO: Pod "downward-api-eeabbcdb-d04e-4cf3-a580-262d4d2801b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09483815s
May 25 11:35:25.345: INFO: Pod "downward-api-eeabbcdb-d04e-4cf3-a580-262d4d2801b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099451148s
STEP: Saw pod success
May 25 11:35:25.345: INFO: Pod "downward-api-eeabbcdb-d04e-4cf3-a580-262d4d2801b5" satisfied condition "Succeeded or Failed"
May 25 11:35:25.348: INFO: Trying to get logs from node kali-worker pod downward-api-eeabbcdb-d04e-4cf3-a580-262d4d2801b5 container dapi-container: 
STEP: delete the pod
May 25 11:35:25.389: INFO: Waiting for pod downward-api-eeabbcdb-d04e-4cf3-a580-262d4d2801b5 to disappear
May 25 11:35:25.393: INFO: Pod downward-api-eeabbcdb-d04e-4cf3-a580-262d4d2801b5 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:35:25.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1254" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":2005,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:35:25.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 25 11:35:25.605: INFO: Waiting up to 5m0s for pod "downward-api-2e0bd0cd-44da-4a28-a246-7b50fc5f23e7" in namespace "downward-api-9237" to be "Succeeded or Failed"
May 25 11:35:25.621: INFO: Pod "downward-api-2e0bd0cd-44da-4a28-a246-7b50fc5f23e7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.639999ms
May 25 11:35:27.870: INFO: Pod "downward-api-2e0bd0cd-44da-4a28-a246-7b50fc5f23e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264232882s
May 25 11:35:29.874: INFO: Pod "downward-api-2e0bd0cd-44da-4a28-a246-7b50fc5f23e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.269141278s
STEP: Saw pod success
May 25 11:35:29.875: INFO: Pod "downward-api-2e0bd0cd-44da-4a28-a246-7b50fc5f23e7" satisfied condition "Succeeded or Failed"
May 25 11:35:29.877: INFO: Trying to get logs from node kali-worker2 pod downward-api-2e0bd0cd-44da-4a28-a246-7b50fc5f23e7 container dapi-container: 
STEP: delete the pod
May 25 11:35:30.068: INFO: Waiting for pod downward-api-2e0bd0cd-44da-4a28-a246-7b50fc5f23e7 to disappear
May 25 11:35:30.130: INFO: Pod downward-api-2e0bd0cd-44da-4a28-a246-7b50fc5f23e7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:35:30.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9237" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":2024,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:35:30.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-6ee67944-63b3-4fea-bff1-321e660233ca in namespace container-probe-4147
May 25 11:35:34.253: INFO: Started pod busybox-6ee67944-63b3-4fea-bff1-321e660233ca in namespace container-probe-4147
STEP: checking the pod's current state and verifying that restartCount is present
May 25 11:35:34.256: INFO: Initial restart count of pod busybox-6ee67944-63b3-4fea-bff1-321e660233ca is 0
May 25 11:36:24.383: INFO: Restart count of pod container-probe-4147/busybox-6ee67944-63b3-4fea-bff1-321e660233ca is now 1 (50.127000446s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:36:24.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4147" for this suite.

• [SLOW TEST:54.294 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":2040,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:36:24.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 11:36:25.418: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 11:36:27.436: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003385, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003385, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003385, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003385, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 11:36:30.475: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:36:30.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9139" for this suite.
STEP: Destroying namespace "webhook-9139-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.421 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":120,"skipped":2051,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:36:30.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:36:31.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-566" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":121,"skipped":2053,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:36:31.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-5848
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 25 11:36:31.204: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 25 11:36:31.894: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 11:36:34.049: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 11:36:35.899: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:36:37.923: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:36:39.899: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:36:41.942: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:36:43.897: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:36:45.905: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 25 11:36:45.911: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 25 11:36:47.915: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 25 11:36:49.936: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 25 11:36:52.014: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 25 11:36:58.281: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.156:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5848 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:36:58.281: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:36:58.313078       7 log.go:172] (0xc00271c580) (0xc000fb2be0) Create stream
I0525 11:36:58.313295       7 log.go:172] (0xc00271c580) (0xc000fb2be0) Stream added, broadcasting: 1
I0525 11:36:58.315544       7 log.go:172] (0xc00271c580) Reply frame received for 1
I0525 11:36:58.315579       7 log.go:172] (0xc00271c580) (0xc000fb2d20) Create stream
I0525 11:36:58.315593       7 log.go:172] (0xc00271c580) (0xc000fb2d20) Stream added, broadcasting: 3
I0525 11:36:58.316502       7 log.go:172] (0xc00271c580) Reply frame received for 3
I0525 11:36:58.316574       7 log.go:172] (0xc00271c580) (0xc000bc9720) Create stream
I0525 11:36:58.316596       7 log.go:172] (0xc00271c580) (0xc000bc9720) Stream added, broadcasting: 5
I0525 11:36:58.317824       7 log.go:172] (0xc00271c580) Reply frame received for 5
I0525 11:36:58.462243       7 log.go:172] (0xc00271c580) Data frame received for 3
I0525 11:36:58.462286       7 log.go:172] (0xc000fb2d20) (3) Data frame handling
I0525 11:36:58.462306       7 log.go:172] (0xc000fb2d20) (3) Data frame sent
I0525 11:36:58.462328       7 log.go:172] (0xc00271c580) Data frame received for 3
I0525 11:36:58.462348       7 log.go:172] (0xc000fb2d20) (3) Data frame handling
I0525 11:36:58.462518       7 log.go:172] (0xc00271c580) Data frame received for 5
I0525 11:36:58.462552       7 log.go:172] (0xc000bc9720) (5) Data frame handling
I0525 11:36:58.464359       7 log.go:172] (0xc00271c580) Data frame received for 1
I0525 11:36:58.464392       7 log.go:172] (0xc000fb2be0) (1) Data frame handling
I0525 11:36:58.464415       7 log.go:172] (0xc000fb2be0) (1) Data frame sent
I0525 11:36:58.464431       7 log.go:172] (0xc00271c580) (0xc000fb2be0) Stream removed, broadcasting: 1
I0525 11:36:58.464465       7 log.go:172] (0xc00271c580) Go away received
I0525 11:36:58.464576       7 log.go:172] (0xc00271c580) (0xc000fb2be0) Stream removed, broadcasting: 1
I0525 11:36:58.464603       7 log.go:172] (0xc00271c580) (0xc000fb2d20) Stream removed, broadcasting: 3
I0525 11:36:58.464618       7 log.go:172] (0xc00271c580) (0xc000bc9720) Stream removed, broadcasting: 5
May 25 11:36:58.464: INFO: Found all expected endpoints: [netserver-0]
May 25 11:36:58.468: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.148:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5848 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:36:58.468: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:36:58.496847       7 log.go:172] (0xc002a1a370) (0xc000ff2780) Create stream
I0525 11:36:58.496875       7 log.go:172] (0xc002a1a370) (0xc000ff2780) Stream added, broadcasting: 1
I0525 11:36:58.499258       7 log.go:172] (0xc002a1a370) Reply frame received for 1
I0525 11:36:58.499294       7 log.go:172] (0xc002a1a370) (0xc000fb2fa0) Create stream
I0525 11:36:58.499307       7 log.go:172] (0xc002a1a370) (0xc000fb2fa0) Stream added, broadcasting: 3
I0525 11:36:58.500345       7 log.go:172] (0xc002a1a370) Reply frame received for 3
I0525 11:36:58.500372       7 log.go:172] (0xc002a1a370) (0xc00169c5a0) Create stream
I0525 11:36:58.500380       7 log.go:172] (0xc002a1a370) (0xc00169c5a0) Stream added, broadcasting: 5
I0525 11:36:58.501545       7 log.go:172] (0xc002a1a370) Reply frame received for 5
I0525 11:36:58.568310       7 log.go:172] (0xc002a1a370) Data frame received for 3
I0525 11:36:58.568342       7 log.go:172] (0xc000fb2fa0) (3) Data frame handling
I0525 11:36:58.568355       7 log.go:172] (0xc000fb2fa0) (3) Data frame sent
I0525 11:36:58.568374       7 log.go:172] (0xc002a1a370) Data frame received for 5
I0525 11:36:58.568390       7 log.go:172] (0xc00169c5a0) (5) Data frame handling
I0525 11:36:58.568413       7 log.go:172] (0xc002a1a370) Data frame received for 3
I0525 11:36:58.568421       7 log.go:172] (0xc000fb2fa0) (3) Data frame handling
I0525 11:36:58.569838       7 log.go:172] (0xc002a1a370) Data frame received for 1
I0525 11:36:58.569853       7 log.go:172] (0xc000ff2780) (1) Data frame handling
I0525 11:36:58.569869       7 log.go:172] (0xc000ff2780) (1) Data frame sent
I0525 11:36:58.569889       7 log.go:172] (0xc002a1a370) (0xc000ff2780) Stream removed, broadcasting: 1
I0525 11:36:58.569910       7 log.go:172] (0xc002a1a370) Go away received
I0525 11:36:58.569988       7 log.go:172] (0xc002a1a370) (0xc000ff2780) Stream removed, broadcasting: 1
I0525 11:36:58.570008       7 log.go:172] (0xc002a1a370) (0xc000fb2fa0) Stream removed, broadcasting: 3
I0525 11:36:58.570024       7 log.go:172] (0xc002a1a370) (0xc00169c5a0) Stream removed, broadcasting: 5
May 25 11:36:58.570: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:36:58.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5848" for this suite.

• [SLOW TEST:27.442 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":122,"skipped":2078,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:36:58.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
May 25 11:36:58.674: INFO: Waiting up to 5m0s for pod "client-containers-c9b78e0a-8f4c-4b48-a622-b21f8c0cd068" in namespace "containers-8304" to be "Succeeded or Failed"
May 25 11:36:58.708: INFO: Pod "client-containers-c9b78e0a-8f4c-4b48-a622-b21f8c0cd068": Phase="Pending", Reason="", readiness=false. Elapsed: 33.933241ms
May 25 11:37:00.712: INFO: Pod "client-containers-c9b78e0a-8f4c-4b48-a622-b21f8c0cd068": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037516098s
May 25 11:37:02.715: INFO: Pod "client-containers-c9b78e0a-8f4c-4b48-a622-b21f8c0cd068": Phase="Running", Reason="", readiness=true. Elapsed: 4.041226914s
May 25 11:37:04.719: INFO: Pod "client-containers-c9b78e0a-8f4c-4b48-a622-b21f8c0cd068": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045080287s
STEP: Saw pod success
May 25 11:37:04.719: INFO: Pod "client-containers-c9b78e0a-8f4c-4b48-a622-b21f8c0cd068" satisfied condition "Succeeded or Failed"
May 25 11:37:04.722: INFO: Trying to get logs from node kali-worker2 pod client-containers-c9b78e0a-8f4c-4b48-a622-b21f8c0cd068 container test-container: 
STEP: delete the pod
May 25 11:37:04.804: INFO: Waiting for pod client-containers-c9b78e0a-8f4c-4b48-a622-b21f8c0cd068 to disappear
May 25 11:37:04.894: INFO: Pod client-containers-c9b78e0a-8f4c-4b48-a622-b21f8c0cd068 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:37:04.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8304" for this suite.

• [SLOW TEST:6.324 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2123,"failed":0}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:37:04.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 25 11:37:05.478: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-9733 /api/v1/namespaces/watch-9733/configmaps/e2e-watch-test-resource-version a6b48d92-bdd9-4260-9c96-08389f7013c7 7178226 0 2020-05-25 11:37:05 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-25 11:37:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 25 11:37:05.478: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-9733 /api/v1/namespaces/watch-9733/configmaps/e2e-watch-test-resource-version a6b48d92-bdd9-4260-9c96-08389f7013c7 7178227 0 2020-05-25 11:37:05 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-25 11:37:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:37:05.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9733" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":124,"skipped":2126,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:37:05.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 11:37:06.678: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 11:37:08.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003426, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003426, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003426, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003426, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 11:37:11.723: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:37:12.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2374" for this suite.
STEP: Destroying namespace "webhook-2374-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.992 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":125,"skipped":2126,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:37:12.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:37:12.591: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72fc141c-2752-4f50-8a8c-3d32103ab48d" in namespace "projected-5105" to be "Succeeded or Failed"
May 25 11:37:12.604: INFO: Pod "downwardapi-volume-72fc141c-2752-4f50-8a8c-3d32103ab48d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.131825ms
May 25 11:37:14.624: INFO: Pod "downwardapi-volume-72fc141c-2752-4f50-8a8c-3d32103ab48d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03325082s
May 25 11:37:16.775: INFO: Pod "downwardapi-volume-72fc141c-2752-4f50-8a8c-3d32103ab48d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183670594s
STEP: Saw pod success
May 25 11:37:16.775: INFO: Pod "downwardapi-volume-72fc141c-2752-4f50-8a8c-3d32103ab48d" satisfied condition "Succeeded or Failed"
May 25 11:37:16.778: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-72fc141c-2752-4f50-8a8c-3d32103ab48d container client-container: 
STEP: delete the pod
May 25 11:37:16.829: INFO: Waiting for pod downwardapi-volume-72fc141c-2752-4f50-8a8c-3d32103ab48d to disappear
May 25 11:37:16.840: INFO: Pod downwardapi-volume-72fc141c-2752-4f50-8a8c-3d32103ab48d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:37:16.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5105" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2133,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:37:16.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:37:30.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8299" for this suite.

• [SLOW TEST:13.291 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":127,"skipped":2218,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:37:30.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 25 11:37:30.266: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-a 413ead21-5e2a-460a-afb4-cb4113a91960 7178439 0 2020-05-25 11:37:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-25 11:37:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 25 11:37:30.267: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-a 413ead21-5e2a-460a-afb4-cb4113a91960 7178439 0 2020-05-25 11:37:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-25 11:37:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 25 11:37:40.276: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-a 413ead21-5e2a-460a-afb4-cb4113a91960 7178479 0 2020-05-25 11:37:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-25 11:37:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May 25 11:37:40.276: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-a 413ead21-5e2a-460a-afb4-cb4113a91960 7178479 0 2020-05-25 11:37:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-25 11:37:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 25 11:37:50.286: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-a 413ead21-5e2a-460a-afb4-cb4113a91960 7178509 0 2020-05-25 11:37:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-25 11:37:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 25 11:37:50.287: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-a 413ead21-5e2a-460a-afb4-cb4113a91960 7178509 0 2020-05-25 11:37:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-25 11:37:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 25 11:38:00.293: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-a 413ead21-5e2a-460a-afb4-cb4113a91960 7178536 0 2020-05-25 11:37:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-25 11:37:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 25 11:38:00.294: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-a 413ead21-5e2a-460a-afb4-cb4113a91960 7178536 0 2020-05-25 11:37:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-25 11:37:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 25 11:38:10.302: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-b c1edec61-1684-4197-abfb-dfb8567022c4 7178566 0 2020-05-25 11:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-25 11:38:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 25 11:38:10.302: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-b c1edec61-1684-4197-abfb-dfb8567022c4 7178566 0 2020-05-25 11:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-25 11:38:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May 25 11:38:20.310: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-b c1edec61-1684-4197-abfb-dfb8567022c4 7178596 0 2020-05-25 11:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-25 11:38:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 25 11:38:20.310: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6250 /api/v1/namespaces/watch-6250/configmaps/e2e-watch-test-configmap-b c1edec61-1684-4197-abfb-dfb8567022c4 7178596 0 2020-05-25 11:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-25 11:38:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:38:30.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6250" for this suite.

• [SLOW TEST:60.184 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":128,"skipped":2248,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:38:30.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 11:38:31.247: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 11:38:33.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003511, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003511, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003511, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003511, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:38:35.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003511, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003511, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003511, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003511, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 11:38:38.353: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 25 11:38:38.503: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:38:38.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9705" for this suite.
STEP: Destroying namespace "webhook-9705-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.849 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":129,"skipped":2264,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:38:39.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-760
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-760
May 25 11:38:39.295: INFO: Found 0 stateful pods, waiting for 1
May 25 11:38:49.301: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 25 11:38:49.330: INFO: Deleting all statefulset in ns statefulset-760
May 25 11:38:49.380: INFO: Scaling statefulset ss to 0
May 25 11:39:09.718: INFO: Waiting for statefulset status.replicas updated to 0
May 25 11:39:09.721: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:39:10.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-760" for this suite.

• [SLOW TEST:30.988 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":130,"skipped":2274,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:39:10.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8265.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8265.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8265.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8265.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8265.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8265.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 25 11:39:16.767: INFO: DNS probes using dns-8265/dns-test-98995d79-ed35-46e9-88f0-eba47b93f62e succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:39:16.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8265" for this suite.

• [SLOW TEST:6.708 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":131,"skipped":2275,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:39:16.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:39:17.047: INFO: Waiting up to 5m0s for pod "downwardapi-volume-668754e8-2c61-4fa7-94c6-c12413aa80ac" in namespace "projected-732" to be "Succeeded or Failed"
May 25 11:39:17.109: INFO: Pod "downwardapi-volume-668754e8-2c61-4fa7-94c6-c12413aa80ac": Phase="Pending", Reason="", readiness=false. Elapsed: 61.736392ms
May 25 11:39:19.251: INFO: Pod "downwardapi-volume-668754e8-2c61-4fa7-94c6-c12413aa80ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203525645s
May 25 11:39:21.255: INFO: Pod "downwardapi-volume-668754e8-2c61-4fa7-94c6-c12413aa80ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207470137s
May 25 11:39:23.260: INFO: Pod "downwardapi-volume-668754e8-2c61-4fa7-94c6-c12413aa80ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.212120084s
STEP: Saw pod success
May 25 11:39:23.260: INFO: Pod "downwardapi-volume-668754e8-2c61-4fa7-94c6-c12413aa80ac" satisfied condition "Succeeded or Failed"
May 25 11:39:23.263: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-668754e8-2c61-4fa7-94c6-c12413aa80ac container client-container: 
STEP: delete the pod
May 25 11:39:23.301: INFO: Waiting for pod downwardapi-volume-668754e8-2c61-4fa7-94c6-c12413aa80ac to disappear
May 25 11:39:23.318: INFO: Pod downwardapi-volume-668754e8-2c61-4fa7-94c6-c12413aa80ac no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:39:23.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-732" for this suite.

• [SLOW TEST:6.454 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2301,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:39:23.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:39:23.428: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 25 11:39:25.484: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:39:26.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7614" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":133,"skipped":2306,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:39:26.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 25 11:39:27.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2080'
May 25 11:39:35.375: INFO: stderr: ""
May 25 11:39:35.375: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
May 25 11:39:40.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2080 -o json'
May 25 11:39:40.516: INFO: stderr: ""
May 25 11:39:40.516: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-05-25T11:39:35Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-05-25T11:39:35Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                            \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.2.164\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                       
     }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-05-25T11:39:38Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-2080\",\n        \"resourceVersion\": \"7179115\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2080/pods/e2e-test-httpd-pod\",\n        \"uid\": \"51974e75-99a0-4800-afc6-181cfe550e27\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-jfvgc\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-jfvgc\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-jfvgc\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-25T11:39:35Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-25T11:39:38Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-25T11:39:38Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-25T11:39:35Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"containerd://f362877b75a28d8407b1001e7ba80609daa0149088476c58c34527220e1967f1\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-05-25T11:39:38Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.17.0.15\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.164\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.164\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-05-25T11:39:35Z\"\n    }\n}\n"
STEP: replace the image in the pod
May 25 11:39:40.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2080'
May 25 11:39:40.983: INFO: stderr: ""
May 25 11:39:40.983: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
May 25 11:39:41.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2080'
May 25 11:39:53.727: INFO: stderr: ""
May 25 11:39:53.727: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:39:53.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2080" for this suite.

• [SLOW TEST:27.127 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":134,"skipped":2350,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:39:53.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:39:53.833: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a3c26ee2-46dd-43a3-aa6d-22711b39cf66" in namespace "security-context-test-2477" to be "Succeeded or Failed"
May 25 11:39:53.839: INFO: Pod "busybox-user-65534-a3c26ee2-46dd-43a3-aa6d-22711b39cf66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.287037ms
May 25 11:39:55.843: INFO: Pod "busybox-user-65534-a3c26ee2-46dd-43a3-aa6d-22711b39cf66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010555112s
May 25 11:39:57.848: INFO: Pod "busybox-user-65534-a3c26ee2-46dd-43a3-aa6d-22711b39cf66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015152416s
May 25 11:39:57.848: INFO: Pod "busybox-user-65534-a3c26ee2-46dd-43a3-aa6d-22711b39cf66" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:39:57.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2477" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2365,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:39:57.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 11:39:59.674: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 11:40:01.990: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003599, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003599, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003599, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003599, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:40:03.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003599, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003599, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003599, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003599, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 11:40:07.024: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:40:19.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-474" for this suite.
STEP: Destroying namespace "webhook-474-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:21.578 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":136,"skipped":2411,"failed":0}
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:40:19.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
May 25 11:40:19.477: INFO: namespace kubectl-3207
May 25 11:40:19.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3207'
May 25 11:40:19.759: INFO: stderr: ""
May 25 11:40:19.759: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 25 11:40:20.763: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 11:40:20.763: INFO: Found 0 / 1
May 25 11:40:21.763: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 11:40:21.763: INFO: Found 0 / 1
May 25 11:40:22.763: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 11:40:22.763: INFO: Found 0 / 1
May 25 11:40:23.763: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 11:40:23.763: INFO: Found 1 / 1
May 25 11:40:23.763: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
May 25 11:40:23.766: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 11:40:23.766: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 25 11:40:23.766: INFO: wait on agnhost-master startup in kubectl-3207 
May 25 11:40:23.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs agnhost-master-22wbj agnhost-master --namespace=kubectl-3207'
May 25 11:40:23.898: INFO: stderr: ""
May 25 11:40:23.898: INFO: stdout: "Paused\n"
STEP: exposing RC
May 25 11:40:23.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3207'
May 25 11:40:24.054: INFO: stderr: ""
May 25 11:40:24.054: INFO: stdout: "service/rm2 exposed\n"
May 25 11:40:24.147: INFO: Service rm2 in namespace kubectl-3207 found.
STEP: exposing service
May 25 11:40:26.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3207'
May 25 11:40:26.335: INFO: stderr: ""
May 25 11:40:26.335: INFO: stdout: "service/rm3 exposed\n"
May 25 11:40:26.351: INFO: Service rm3 in namespace kubectl-3207 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:40:28.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3207" for this suite.

• [SLOW TEST:8.933 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":137,"skipped":2411,"failed":0}
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:40:28.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:40:28.556: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-2ab3ff8c-7550-4682-aeec-b7de73ab68a8" in namespace "security-context-test-8332" to be "Succeeded or Failed"
May 25 11:40:28.559: INFO: Pod "busybox-privileged-false-2ab3ff8c-7550-4682-aeec-b7de73ab68a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.473842ms
May 25 11:40:30.944: INFO: Pod "busybox-privileged-false-2ab3ff8c-7550-4682-aeec-b7de73ab68a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38775544s
May 25 11:40:32.998: INFO: Pod "busybox-privileged-false-2ab3ff8c-7550-4682-aeec-b7de73ab68a8": Phase="Running", Reason="", readiness=true. Elapsed: 4.442113184s
May 25 11:40:35.033: INFO: Pod "busybox-privileged-false-2ab3ff8c-7550-4682-aeec-b7de73ab68a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.477053323s
May 25 11:40:35.033: INFO: Pod "busybox-privileged-false-2ab3ff8c-7550-4682-aeec-b7de73ab68a8" satisfied condition "Succeeded or Failed"
May 25 11:40:35.039: INFO: Got logs for pod "busybox-privileged-false-2ab3ff8c-7550-4682-aeec-b7de73ab68a8": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:40:35.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8332" for this suite.

• [SLOW TEST:6.685 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2411,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:40:35.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 25 11:40:35.094: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 25 11:40:35.118: INFO: Waiting for terminating namespaces to be deleted...
May 25 11:40:35.121: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May 25 11:40:35.126: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:40:35.126: INFO: 	Container kindnet-cni ready: true, restart count 1
May 25 11:40:35.126: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:40:35.126: INFO: 	Container kube-proxy ready: true, restart count 0
May 25 11:40:35.126: INFO: busybox-privileged-false-2ab3ff8c-7550-4682-aeec-b7de73ab68a8 from security-context-test-8332 started at 2020-05-25 11:40:28 +0000 UTC (1 container statuses recorded)
May 25 11:40:35.126: INFO: 	Container busybox-privileged-false-2ab3ff8c-7550-4682-aeec-b7de73ab68a8 ready: false, restart count 0
May 25 11:40:35.126: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May 25 11:40:35.131: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:40:35.131: INFO: 	Container kindnet-cni ready: true, restart count 0
May 25 11:40:35.131: INFO: agnhost-master-22wbj from kubectl-3207 started at 2020-05-25 11:40:20 +0000 UTC (1 container statuses recorded)
May 25 11:40:35.131: INFO: 	Container agnhost-master ready: true, restart count 0
May 25 11:40:35.131: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:40:35.131: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-da919efb-e299-42c8-a9dd-7c346402d69b 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-da919efb-e299-42c8-a9dd-7c346402d69b off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-da919efb-e299-42c8-a9dd-7c346402d69b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:40:51.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8131" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:16.513 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":139,"skipped":2429,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:40:51.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 25 11:40:57.218: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:40:57.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7580" for this suite.

• [SLOW TEST:5.678 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2469,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:40:57.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-2898009f-e6f9-433e-be8f-0c56f16bff43
STEP: Creating a pod to test consume secrets
May 25 11:40:57.383: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fae643e8-4a28-46da-8d25-fa14555e5c1e" in namespace "projected-3579" to be "Succeeded or Failed"
May 25 11:40:57.458: INFO: Pod "pod-projected-secrets-fae643e8-4a28-46da-8d25-fa14555e5c1e": Phase="Pending", Reason="", readiness=false. Elapsed: 74.7427ms
May 25 11:40:59.617: INFO: Pod "pod-projected-secrets-fae643e8-4a28-46da-8d25-fa14555e5c1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233922724s
May 25 11:41:01.672: INFO: Pod "pod-projected-secrets-fae643e8-4a28-46da-8d25-fa14555e5c1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288853372s
May 25 11:41:03.675: INFO: Pod "pod-projected-secrets-fae643e8-4a28-46da-8d25-fa14555e5c1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.292107617s
STEP: Saw pod success
May 25 11:41:03.675: INFO: Pod "pod-projected-secrets-fae643e8-4a28-46da-8d25-fa14555e5c1e" satisfied condition "Succeeded or Failed"
May 25 11:41:03.678: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-fae643e8-4a28-46da-8d25-fa14555e5c1e container projected-secret-volume-test: 
STEP: delete the pod
May 25 11:41:04.150: INFO: Waiting for pod pod-projected-secrets-fae643e8-4a28-46da-8d25-fa14555e5c1e to disappear
May 25 11:41:04.213: INFO: Pod pod-projected-secrets-fae643e8-4a28-46da-8d25-fa14555e5c1e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:41:04.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3579" for this suite.

• [SLOW TEST:7.173 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2477,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:41:04.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-5142/secret-test-1ce36f6b-c7ae-4b7a-85c9-3046c27caaaf
STEP: Creating a pod to test consume secrets
May 25 11:41:04.880: INFO: Waiting up to 5m0s for pod "pod-configmaps-e65c3d5c-5046-4fa7-8331-c1ddeb2bd896" in namespace "secrets-5142" to be "Succeeded or Failed"
May 25 11:41:04.975: INFO: Pod "pod-configmaps-e65c3d5c-5046-4fa7-8331-c1ddeb2bd896": Phase="Pending", Reason="", readiness=false. Elapsed: 95.009886ms
May 25 11:41:06.979: INFO: Pod "pod-configmaps-e65c3d5c-5046-4fa7-8331-c1ddeb2bd896": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099180459s
May 25 11:41:08.983: INFO: Pod "pod-configmaps-e65c3d5c-5046-4fa7-8331-c1ddeb2bd896": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102779123s
STEP: Saw pod success
May 25 11:41:08.983: INFO: Pod "pod-configmaps-e65c3d5c-5046-4fa7-8331-c1ddeb2bd896" satisfied condition "Succeeded or Failed"
May 25 11:41:08.986: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-e65c3d5c-5046-4fa7-8331-c1ddeb2bd896 container env-test: 
STEP: delete the pod
May 25 11:41:09.044: INFO: Waiting for pod pod-configmaps-e65c3d5c-5046-4fa7-8331-c1ddeb2bd896 to disappear
May 25 11:41:09.071: INFO: Pod pod-configmaps-e65c3d5c-5046-4fa7-8331-c1ddeb2bd896 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:41:09.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5142" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2492,"failed":0}
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:41:09.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
May 25 11:41:09.324: INFO: Waiting up to 5m0s for pod "client-containers-227cd36a-98c0-4850-ac2a-3a0b78eca99c" in namespace "containers-2239" to be "Succeeded or Failed"
May 25 11:41:09.352: INFO: Pod "client-containers-227cd36a-98c0-4850-ac2a-3a0b78eca99c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.011431ms
May 25 11:41:11.356: INFO: Pod "client-containers-227cd36a-98c0-4850-ac2a-3a0b78eca99c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031889686s
May 25 11:41:13.359: INFO: Pod "client-containers-227cd36a-98c0-4850-ac2a-3a0b78eca99c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035384889s
May 25 11:41:15.364: INFO: Pod "client-containers-227cd36a-98c0-4850-ac2a-3a0b78eca99c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040207825s
STEP: Saw pod success
May 25 11:41:15.364: INFO: Pod "client-containers-227cd36a-98c0-4850-ac2a-3a0b78eca99c" satisfied condition "Succeeded or Failed"
May 25 11:41:15.367: INFO: Trying to get logs from node kali-worker2 pod client-containers-227cd36a-98c0-4850-ac2a-3a0b78eca99c container test-container: 
STEP: delete the pod
May 25 11:41:15.413: INFO: Waiting for pod client-containers-227cd36a-98c0-4850-ac2a-3a0b78eca99c to disappear
May 25 11:41:15.462: INFO: Pod client-containers-227cd36a-98c0-4850-ac2a-3a0b78eca99c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:41:15.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2239" for this suite.

• [SLOW TEST:6.354 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2498,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:41:15.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
May 25 11:41:15.557: INFO: Waiting up to 5m0s for pod "pod-09f4bc85-4866-4eb0-a03c-0d6114d6309c" in namespace "emptydir-6140" to be "Succeeded or Failed"
May 25 11:41:15.563: INFO: Pod "pod-09f4bc85-4866-4eb0-a03c-0d6114d6309c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268148ms
May 25 11:41:17.568: INFO: Pod "pod-09f4bc85-4866-4eb0-a03c-0d6114d6309c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010807307s
May 25 11:41:19.571: INFO: Pod "pod-09f4bc85-4866-4eb0-a03c-0d6114d6309c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014019331s
STEP: Saw pod success
May 25 11:41:19.571: INFO: Pod "pod-09f4bc85-4866-4eb0-a03c-0d6114d6309c" satisfied condition "Succeeded or Failed"
May 25 11:41:19.572: INFO: Trying to get logs from node kali-worker pod pod-09f4bc85-4866-4eb0-a03c-0d6114d6309c container test-container: 
STEP: delete the pod
May 25 11:41:19.640: INFO: Waiting for pod pod-09f4bc85-4866-4eb0-a03c-0d6114d6309c to disappear
May 25 11:41:19.647: INFO: Pod pod-09f4bc85-4866-4eb0-a03c-0d6114d6309c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:41:19.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6140" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2511,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:41:19.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 25 11:41:19.729: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
May 25 11:41:20.112: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 25 11:41:22.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003680, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003680, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003680, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003680, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:41:24.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003680, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003680, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003680, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003680, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:41:27.508: INFO: Waited 622.267624ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:41:28.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7335" for this suite.

• [SLOW TEST:8.392 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":145,"skipped":2538,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:41:28.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:41:32.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-876" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2554,"failed":0}
SSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:41:32.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
May 25 11:41:37.001: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2020 pod-service-account-bf0aea56-1498-4287-9cbc-1a78d8ab1e73 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 25 11:41:37.246: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2020 pod-service-account-bf0aea56-1498-4287-9cbc-1a78d8ab1e73 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 25 11:41:37.516: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2020 pod-service-account-bf0aea56-1498-4287-9cbc-1a78d8ab1e73 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:41:37.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2020" for this suite.

• [SLOW TEST:5.397 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":147,"skipped":2564,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:41:37.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 25 11:41:46.050: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 25 11:41:46.061: INFO: Pod pod-with-poststart-http-hook still exists
May 25 11:41:48.061: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 25 11:41:48.070: INFO: Pod pod-with-poststart-http-hook still exists
May 25 11:41:50.061: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 25 11:41:50.065: INFO: Pod pod-with-poststart-http-hook still exists
May 25 11:41:52.061: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 25 11:41:52.066: INFO: Pod pod-with-poststart-http-hook still exists
May 25 11:41:54.061: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 25 11:41:54.065: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:41:54.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4878" for this suite.

• [SLOW TEST:16.333 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2569,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:41:54.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-499.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-499.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-499.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-499.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-499.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-499.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-499.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 25 11:42:00.301: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:00.304: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:00.306: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:00.309: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:00.316: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:00.319: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:00.322: INFO: Unable to read jessie_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:00.325: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:00.331: INFO: Lookups using dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local]

May 25 11:42:05.336: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:05.340: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:05.344: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:05.347: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:05.356: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:05.358: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:05.361: INFO: Unable to read jessie_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:05.363: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:05.368: INFO: Lookups using dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local]

May 25 11:42:10.336: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:10.340: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:10.343: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:10.347: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:10.357: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:10.360: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:10.363: INFO: Unable to read jessie_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:10.367: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:10.372: INFO: Lookups using dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local]

May 25 11:42:15.335: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:15.340: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:15.342: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:15.348: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:15.357: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:15.359: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:15.362: INFO: Unable to read jessie_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:15.364: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:15.370: INFO: Lookups using dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local]

May 25 11:42:20.337: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:20.341: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:20.344: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:20.347: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:20.356: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:20.359: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:20.362: INFO: Unable to read jessie_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:20.364: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:20.370: INFO: Lookups using dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local]

May 25 11:42:25.382: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:25.386: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:25.390: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:25.394: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:25.404: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:25.407: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:25.409: INFO: Unable to read jessie_udp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:25.412: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local from pod dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d: the server could not find the requested resource (get pods dns-test-699ec066-3a47-46c1-96cc-898d03289c2d)
May 25 11:42:25.418: INFO: Lookups using dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local wheezy_udp@dns-test-service-2.dns-499.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-499.svc.cluster.local jessie_udp@dns-test-service-2.dns-499.svc.cluster.local jessie_tcp@dns-test-service-2.dns-499.svc.cluster.local]

May 25 11:42:30.370: INFO: DNS probes using dns-499/dns-test-699ec066-3a47-46c1-96cc-898d03289c2d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:42:30.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-499" for this suite.

• [SLOW TEST:36.967 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":149,"skipped":2571,"failed":0}
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:42:31.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-4937
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 25 11:42:31.185: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 25 11:42:31.259: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 11:42:33.657: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 11:42:35.460: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 11:42:37.263: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:42:39.263: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:42:41.263: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:42:43.263: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:42:45.279: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 11:42:47.263: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 25 11:42:47.269: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 25 11:42:51.448: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.164:8080/dial?request=hostname&protocol=http&host=10.244.2.174&port=8080&tries=1'] Namespace:pod-network-test-4937 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:42:51.448: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:42:51.483854       7 log.go:172] (0xc002d92a50) (0xc002996320) Create stream
I0525 11:42:51.483881       7 log.go:172] (0xc002d92a50) (0xc002996320) Stream added, broadcasting: 1
I0525 11:42:51.486162       7 log.go:172] (0xc002d92a50) Reply frame received for 1
I0525 11:42:51.486204       7 log.go:172] (0xc002d92a50) (0xc0029963c0) Create stream
I0525 11:42:51.486220       7 log.go:172] (0xc002d92a50) (0xc0029963c0) Stream added, broadcasting: 3
I0525 11:42:51.487239       7 log.go:172] (0xc002d92a50) Reply frame received for 3
I0525 11:42:51.487263       7 log.go:172] (0xc002d92a50) (0xc001964f00) Create stream
I0525 11:42:51.487273       7 log.go:172] (0xc002d92a50) (0xc001964f00) Stream added, broadcasting: 5
I0525 11:42:51.488148       7 log.go:172] (0xc002d92a50) Reply frame received for 5
I0525 11:42:51.575012       7 log.go:172] (0xc002d92a50) Data frame received for 3
I0525 11:42:51.575045       7 log.go:172] (0xc0029963c0) (3) Data frame handling
I0525 11:42:51.575067       7 log.go:172] (0xc0029963c0) (3) Data frame sent
I0525 11:42:51.575574       7 log.go:172] (0xc002d92a50) Data frame received for 5
I0525 11:42:51.575603       7 log.go:172] (0xc001964f00) (5) Data frame handling
I0525 11:42:51.575852       7 log.go:172] (0xc002d92a50) Data frame received for 3
I0525 11:42:51.575871       7 log.go:172] (0xc0029963c0) (3) Data frame handling
I0525 11:42:51.577641       7 log.go:172] (0xc002d92a50) Data frame received for 1
I0525 11:42:51.577676       7 log.go:172] (0xc002996320) (1) Data frame handling
I0525 11:42:51.577708       7 log.go:172] (0xc002996320) (1) Data frame sent
I0525 11:42:51.577744       7 log.go:172] (0xc002d92a50) (0xc002996320) Stream removed, broadcasting: 1
I0525 11:42:51.577879       7 log.go:172] (0xc002d92a50) (0xc002996320) Stream removed, broadcasting: 1
I0525 11:42:51.577949       7 log.go:172] (0xc002d92a50) (0xc0029963c0) Stream removed, broadcasting: 3
I0525 11:42:51.577973       7 log.go:172] (0xc002d92a50) (0xc001964f00) Stream removed, broadcasting: 5
I0525 11:42:51.578029       7 log.go:172] (0xc002d92a50) Go away received
May 25 11:42:51.578: INFO: Waiting for responses: map[]
May 25 11:42:51.581: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.164:8080/dial?request=hostname&protocol=http&host=10.244.1.163&port=8080&tries=1'] Namespace:pod-network-test-4937 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 11:42:51.581: INFO: >>> kubeConfig: /root/.kube/config
I0525 11:42:51.617752       7 log.go:172] (0xc00299c420) (0xc001965b80) Create stream
I0525 11:42:51.617777       7 log.go:172] (0xc00299c420) (0xc001965b80) Stream added, broadcasting: 1
I0525 11:42:51.620206       7 log.go:172] (0xc00299c420) Reply frame received for 1
I0525 11:42:51.620242       7 log.go:172] (0xc00299c420) (0xc0011839a0) Create stream
I0525 11:42:51.620256       7 log.go:172] (0xc00299c420) (0xc0011839a0) Stream added, broadcasting: 3
I0525 11:42:51.621332       7 log.go:172] (0xc00299c420) Reply frame received for 3
I0525 11:42:51.621366       7 log.go:172] (0xc00299c420) (0xc001183d60) Create stream
I0525 11:42:51.621388       7 log.go:172] (0xc00299c420) (0xc001183d60) Stream added, broadcasting: 5
I0525 11:42:51.622513       7 log.go:172] (0xc00299c420) Reply frame received for 5
I0525 11:42:51.703535       7 log.go:172] (0xc00299c420) Data frame received for 3
I0525 11:42:51.703560       7 log.go:172] (0xc0011839a0) (3) Data frame handling
I0525 11:42:51.703572       7 log.go:172] (0xc0011839a0) (3) Data frame sent
I0525 11:42:51.703780       7 log.go:172] (0xc00299c420) Data frame received for 3
I0525 11:42:51.703812       7 log.go:172] (0xc0011839a0) (3) Data frame handling
I0525 11:42:51.704136       7 log.go:172] (0xc00299c420) Data frame received for 5
I0525 11:42:51.704148       7 log.go:172] (0xc001183d60) (5) Data frame handling
I0525 11:42:51.706487       7 log.go:172] (0xc00299c420) Data frame received for 1
I0525 11:42:51.706513       7 log.go:172] (0xc001965b80) (1) Data frame handling
I0525 11:42:51.706529       7 log.go:172] (0xc001965b80) (1) Data frame sent
I0525 11:42:51.706575       7 log.go:172] (0xc00299c420) (0xc001965b80) Stream removed, broadcasting: 1
I0525 11:42:51.706599       7 log.go:172] (0xc00299c420) Go away received
I0525 11:42:51.706699       7 log.go:172] (0xc00299c420) (0xc001965b80) Stream removed, broadcasting: 1
I0525 11:42:51.706733       7 log.go:172] (0xc00299c420) (0xc0011839a0) Stream removed, broadcasting: 3
I0525 11:42:51.706747       7 log.go:172] (0xc00299c420) (0xc001183d60) Stream removed, broadcasting: 5
May 25 11:42:51.706: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:42:51.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4937" for this suite.

• [SLOW TEST:20.673 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2571,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:42:51.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-5x8v
STEP: Creating a pod to test atomic-volume-subpath
May 25 11:42:51.876: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5x8v" in namespace "subpath-3793" to be "Succeeded or Failed"
May 25 11:42:51.879: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Pending", Reason="", readiness=false. Elapsed: 3.099208ms
May 25 11:42:53.986: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11028434s
May 25 11:42:55.991: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Running", Reason="", readiness=true. Elapsed: 4.114969595s
May 25 11:42:58.010: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Running", Reason="", readiness=true. Elapsed: 6.134173127s
May 25 11:43:00.083: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Running", Reason="", readiness=true. Elapsed: 8.206577629s
May 25 11:43:02.087: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Running", Reason="", readiness=true. Elapsed: 10.211040511s
May 25 11:43:04.091: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Running", Reason="", readiness=true. Elapsed: 12.214444635s
May 25 11:43:06.095: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Running", Reason="", readiness=true. Elapsed: 14.218529657s
May 25 11:43:08.098: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Running", Reason="", readiness=true. Elapsed: 16.222365713s
May 25 11:43:10.103: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Running", Reason="", readiness=true. Elapsed: 18.227094957s
May 25 11:43:12.107: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Running", Reason="", readiness=true. Elapsed: 20.23116063s
May 25 11:43:14.111: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Running", Reason="", readiness=true. Elapsed: 22.235299305s
May 25 11:43:16.116: INFO: Pod "pod-subpath-test-configmap-5x8v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.239889039s
STEP: Saw pod success
May 25 11:43:16.116: INFO: Pod "pod-subpath-test-configmap-5x8v" satisfied condition "Succeeded or Failed"
May 25 11:43:16.120: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-5x8v container test-container-subpath-configmap-5x8v: 
STEP: delete the pod
May 25 11:43:16.165: INFO: Waiting for pod pod-subpath-test-configmap-5x8v to disappear
May 25 11:43:16.180: INFO: Pod pod-subpath-test-configmap-5x8v no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5x8v
May 25 11:43:16.180: INFO: Deleting pod "pod-subpath-test-configmap-5x8v" in namespace "subpath-3793"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:43:16.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3793" for this suite.

• [SLOW TEST:24.492 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":151,"skipped":2586,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:43:16.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:43:16.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:43:20.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7567" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2606,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:43:20.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
May 25 11:43:20.431: INFO: Waiting up to 5m0s for pod "pod-a8b1ca69-9a8b-4541-b707-06494ce3aad6" in namespace "emptydir-4768" to be "Succeeded or Failed"
May 25 11:43:20.440: INFO: Pod "pod-a8b1ca69-9a8b-4541-b707-06494ce3aad6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.025654ms
May 25 11:43:22.454: INFO: Pod "pod-a8b1ca69-9a8b-4541-b707-06494ce3aad6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022784153s
May 25 11:43:24.459: INFO: Pod "pod-a8b1ca69-9a8b-4541-b707-06494ce3aad6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027658557s
STEP: Saw pod success
May 25 11:43:24.459: INFO: Pod "pod-a8b1ca69-9a8b-4541-b707-06494ce3aad6" satisfied condition "Succeeded or Failed"
May 25 11:43:24.462: INFO: Trying to get logs from node kali-worker2 pod pod-a8b1ca69-9a8b-4541-b707-06494ce3aad6 container test-container: 
STEP: delete the pod
May 25 11:43:24.515: INFO: Waiting for pod pod-a8b1ca69-9a8b-4541-b707-06494ce3aad6 to disappear
May 25 11:43:24.536: INFO: Pod pod-a8b1ca69-9a8b-4541-b707-06494ce3aad6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:43:24.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4768" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2626,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:43:24.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
May 25 11:43:24.642: INFO: Waiting up to 5m0s for pod "pod-76c12418-9fb0-43da-8c67-8923ba6deb16" in namespace "emptydir-7043" to be "Succeeded or Failed"
May 25 11:43:24.650: INFO: Pod "pod-76c12418-9fb0-43da-8c67-8923ba6deb16": Phase="Pending", Reason="", readiness=false. Elapsed: 8.367271ms
May 25 11:43:26.655: INFO: Pod "pod-76c12418-9fb0-43da-8c67-8923ba6deb16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012724459s
May 25 11:43:29.187: INFO: Pod "pod-76c12418-9fb0-43da-8c67-8923ba6deb16": Phase="Running", Reason="", readiness=true. Elapsed: 4.544704097s
May 25 11:43:31.598: INFO: Pod "pod-76c12418-9fb0-43da-8c67-8923ba6deb16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.956175036s
STEP: Saw pod success
May 25 11:43:31.598: INFO: Pod "pod-76c12418-9fb0-43da-8c67-8923ba6deb16" satisfied condition "Succeeded or Failed"
May 25 11:43:31.602: INFO: Trying to get logs from node kali-worker pod pod-76c12418-9fb0-43da-8c67-8923ba6deb16 container test-container: 
STEP: delete the pod
May 25 11:43:32.374: INFO: Waiting for pod pod-76c12418-9fb0-43da-8c67-8923ba6deb16 to disappear
May 25 11:43:32.778: INFO: Pod pod-76c12418-9fb0-43da-8c67-8923ba6deb16 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:43:32.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7043" for this suite.

• [SLOW TEST:9.416 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2628,"failed":0}
SSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:43:33.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:43:35.737: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-1790ce48-9b5f-4c8d-9402-5054c391bf2e" in namespace "security-context-test-8913" to be "Succeeded or Failed"
May 25 11:43:36.059: INFO: Pod "alpine-nnp-false-1790ce48-9b5f-4c8d-9402-5054c391bf2e": Phase="Pending", Reason="", readiness=false. Elapsed: 321.774858ms
May 25 11:43:38.063: INFO: Pod "alpine-nnp-false-1790ce48-9b5f-4c8d-9402-5054c391bf2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326627675s
May 25 11:43:40.068: INFO: Pod "alpine-nnp-false-1790ce48-9b5f-4c8d-9402-5054c391bf2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.33108107s
May 25 11:43:40.068: INFO: Pod "alpine-nnp-false-1790ce48-9b5f-4c8d-9402-5054c391bf2e" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:43:40.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8913" for this suite.

• [SLOW TEST:6.101 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2633,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:43:40.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
May 25 11:43:40.503: INFO: Waiting up to 5m0s for pod "var-expansion-bd6849b8-4ee6-4c54-a03b-b39f1d89386d" in namespace "var-expansion-2605" to be "Succeeded or Failed"
May 25 11:43:40.560: INFO: Pod "var-expansion-bd6849b8-4ee6-4c54-a03b-b39f1d89386d": Phase="Pending", Reason="", readiness=false. Elapsed: 56.133819ms
May 25 11:43:42.563: INFO: Pod "var-expansion-bd6849b8-4ee6-4c54-a03b-b39f1d89386d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059428174s
May 25 11:43:44.567: INFO: Pod "var-expansion-bd6849b8-4ee6-4c54-a03b-b39f1d89386d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063803958s
STEP: Saw pod success
May 25 11:43:44.567: INFO: Pod "var-expansion-bd6849b8-4ee6-4c54-a03b-b39f1d89386d" satisfied condition "Succeeded or Failed"
May 25 11:43:44.571: INFO: Trying to get logs from node kali-worker2 pod var-expansion-bd6849b8-4ee6-4c54-a03b-b39f1d89386d container dapi-container: 
STEP: delete the pod
May 25 11:43:44.638: INFO: Waiting for pod var-expansion-bd6849b8-4ee6-4c54-a03b-b39f1d89386d to disappear
May 25 11:43:44.651: INFO: Pod var-expansion-bd6849b8-4ee6-4c54-a03b-b39f1d89386d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:43:44.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2605" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2640,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:43:44.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-dnt7
STEP: Creating a pod to test atomic-volume-subpath
May 25 11:43:44.767: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dnt7" in namespace "subpath-7613" to be "Succeeded or Failed"
May 25 11:43:44.784: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.147773ms
May 25 11:43:46.789: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021462765s
May 25 11:43:48.794: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Running", Reason="", readiness=true. Elapsed: 4.026496344s
May 25 11:43:50.799: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Running", Reason="", readiness=true. Elapsed: 6.031736902s
May 25 11:43:52.803: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Running", Reason="", readiness=true. Elapsed: 8.035434086s
May 25 11:43:54.842: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Running", Reason="", readiness=true. Elapsed: 10.075365954s
May 25 11:43:56.847: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Running", Reason="", readiness=true. Elapsed: 12.079763153s
May 25 11:43:58.851: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Running", Reason="", readiness=true. Elapsed: 14.083954373s
May 25 11:44:00.855: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Running", Reason="", readiness=true. Elapsed: 16.08833667s
May 25 11:44:02.860: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Running", Reason="", readiness=true. Elapsed: 18.092849928s
May 25 11:44:04.863: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Running", Reason="", readiness=true. Elapsed: 20.095868227s
May 25 11:44:06.867: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Running", Reason="", readiness=true. Elapsed: 22.099786131s
May 25 11:44:08.872: INFO: Pod "pod-subpath-test-secret-dnt7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.104460335s
STEP: Saw pod success
May 25 11:44:08.872: INFO: Pod "pod-subpath-test-secret-dnt7" satisfied condition "Succeeded or Failed"
May 25 11:44:08.875: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-dnt7 container test-container-subpath-secret-dnt7: 
STEP: delete the pod
May 25 11:44:08.960: INFO: Waiting for pod pod-subpath-test-secret-dnt7 to disappear
May 25 11:44:08.974: INFO: Pod pod-subpath-test-secret-dnt7 no longer exists
STEP: Deleting pod pod-subpath-test-secret-dnt7
May 25 11:44:08.974: INFO: Deleting pod "pod-subpath-test-secret-dnt7" in namespace "subpath-7613"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:44:08.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7613" for this suite.

• [SLOW TEST:24.308 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":157,"skipped":2665,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:44:08.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 25 11:44:09.139: INFO: Waiting up to 5m0s for pod "pod-d238574b-1974-40df-92a7-21f38dfc9add" in namespace "emptydir-4345" to be "Succeeded or Failed"
May 25 11:44:09.186: INFO: Pod "pod-d238574b-1974-40df-92a7-21f38dfc9add": Phase="Pending", Reason="", readiness=false. Elapsed: 46.828066ms
May 25 11:44:11.190: INFO: Pod "pod-d238574b-1974-40df-92a7-21f38dfc9add": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050594247s
May 25 11:44:13.296: INFO: Pod "pod-d238574b-1974-40df-92a7-21f38dfc9add": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156550564s
STEP: Saw pod success
May 25 11:44:13.296: INFO: Pod "pod-d238574b-1974-40df-92a7-21f38dfc9add" satisfied condition "Succeeded or Failed"
May 25 11:44:13.299: INFO: Trying to get logs from node kali-worker2 pod pod-d238574b-1974-40df-92a7-21f38dfc9add container test-container: 
STEP: delete the pod
May 25 11:44:13.616: INFO: Waiting for pod pod-d238574b-1974-40df-92a7-21f38dfc9add to disappear
May 25 11:44:13.675: INFO: Pod pod-d238574b-1974-40df-92a7-21f38dfc9add no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:44:13.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4345" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2665,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:44:13.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:44:14.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9534" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":159,"skipped":2673,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:44:14.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:44:14.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May 25 11:44:17.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4217 create -f -'
May 25 11:44:24.061: INFO: stderr: ""
May 25 11:44:24.061: INFO: stdout: "e2e-test-crd-publish-openapi-1829-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 25 11:44:24.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4217 delete e2e-test-crd-publish-openapi-1829-crds test-foo'
May 25 11:44:24.323: INFO: stderr: ""
May 25 11:44:24.323: INFO: stdout: "e2e-test-crd-publish-openapi-1829-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May 25 11:44:24.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4217 apply -f -'
May 25 11:44:24.610: INFO: stderr: ""
May 25 11:44:24.610: INFO: stdout: "e2e-test-crd-publish-openapi-1829-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 25 11:44:24.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4217 delete e2e-test-crd-publish-openapi-1829-crds test-foo'
May 25 11:44:24.725: INFO: stderr: ""
May 25 11:44:24.725: INFO: stdout: "e2e-test-crd-publish-openapi-1829-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May 25 11:44:24.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4217 create -f -'
May 25 11:44:24.954: INFO: rc: 1
May 25 11:44:24.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4217 apply -f -'
May 25 11:44:25.196: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May 25 11:44:25.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4217 create -f -'
May 25 11:44:25.434: INFO: rc: 1
May 25 11:44:25.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4217 apply -f -'
May 25 11:44:25.700: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
May 25 11:44:25.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1829-crds'
May 25 11:44:25.982: INFO: stderr: ""
May 25 11:44:25.982: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1829-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May 25 11:44:25.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1829-crds.metadata'
May 25 11:44:26.234: INFO: stderr: ""
May 25 11:44:26.234: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1829-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May 25 11:44:26.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1829-crds.spec'
May 25 11:44:26.491: INFO: stderr: ""
May 25 11:44:26.491: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1829-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May 25 11:44:26.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1829-crds.spec.bars'
May 25 11:44:26.722: INFO: stderr: ""
May 25 11:44:26.722: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1829-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May 25 11:44:26.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1829-crds.spec.bars2'
May 25 11:44:26.950: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:44:29.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4217" for this suite.

• [SLOW TEST:15.692 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":160,"skipped":2690,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:44:29.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:44:29.955: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:44:36.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6176" for this suite.

• [SLOW TEST:6.447 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":161,"skipped":2699,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:44:36.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
May 25 11:44:36.541: INFO: Waiting up to 5m0s for pod "client-containers-9de33de3-ba7a-4329-8a1a-f83811eda627" in namespace "containers-3416" to be "Succeeded or Failed"
May 25 11:44:36.547: INFO: Pod "client-containers-9de33de3-ba7a-4329-8a1a-f83811eda627": Phase="Pending", Reason="", readiness=false. Elapsed: 6.53291ms
May 25 11:44:38.552: INFO: Pod "client-containers-9de33de3-ba7a-4329-8a1a-f83811eda627": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010745171s
May 25 11:44:40.556: INFO: Pod "client-containers-9de33de3-ba7a-4329-8a1a-f83811eda627": Phase="Running", Reason="", readiness=true. Elapsed: 4.01541145s
May 25 11:44:42.561: INFO: Pod "client-containers-9de33de3-ba7a-4329-8a1a-f83811eda627": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020192464s
STEP: Saw pod success
May 25 11:44:42.561: INFO: Pod "client-containers-9de33de3-ba7a-4329-8a1a-f83811eda627" satisfied condition "Succeeded or Failed"
May 25 11:44:42.564: INFO: Trying to get logs from node kali-worker2 pod client-containers-9de33de3-ba7a-4329-8a1a-f83811eda627 container test-container: 
STEP: delete the pod
May 25 11:44:42.620: INFO: Waiting for pod client-containers-9de33de3-ba7a-4329-8a1a-f83811eda627 to disappear
May 25 11:44:42.631: INFO: Pod client-containers-9de33de3-ba7a-4329-8a1a-f83811eda627 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:44:42.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3416" for this suite.

• [SLOW TEST:6.295 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2700,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:44:42.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 11:44:43.935: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 11:44:46.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003883, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003883, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003884, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003883, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 11:44:49.306: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:44:59.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4180" for this suite.
STEP: Destroying namespace "webhook-4180-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.078 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":163,"skipped":2702,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:44:59.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-9km7
STEP: Creating a pod to test atomic-volume-subpath
May 25 11:44:59.880: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9km7" in namespace "subpath-3006" to be "Succeeded or Failed"
May 25 11:44:59.926: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Pending", Reason="", readiness=false. Elapsed: 45.672093ms
May 25 11:45:01.931: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050609443s
May 25 11:45:03.937: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Running", Reason="", readiness=true. Elapsed: 4.056641611s
May 25 11:45:05.944: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Running", Reason="", readiness=true. Elapsed: 6.063218676s
May 25 11:45:07.948: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Running", Reason="", readiness=true. Elapsed: 8.067785674s
May 25 11:45:09.953: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Running", Reason="", readiness=true. Elapsed: 10.072346452s
May 25 11:45:11.957: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Running", Reason="", readiness=true. Elapsed: 12.076882649s
May 25 11:45:13.962: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Running", Reason="", readiness=true. Elapsed: 14.081362866s
May 25 11:45:15.967: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Running", Reason="", readiness=true. Elapsed: 16.086042876s
May 25 11:45:17.971: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Running", Reason="", readiness=true. Elapsed: 18.090629036s
May 25 11:45:19.975: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Running", Reason="", readiness=true. Elapsed: 20.094750785s
May 25 11:45:21.980: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Running", Reason="", readiness=true. Elapsed: 22.099138119s
May 25 11:45:23.984: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Running", Reason="", readiness=true. Elapsed: 24.103955263s
May 25 11:45:25.989: INFO: Pod "pod-subpath-test-configmap-9km7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.108494723s
STEP: Saw pod success
May 25 11:45:25.989: INFO: Pod "pod-subpath-test-configmap-9km7" satisfied condition "Succeeded or Failed"
May 25 11:45:25.991: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-9km7 container test-container-subpath-configmap-9km7: 
STEP: delete the pod
May 25 11:45:26.018: INFO: Waiting for pod pod-subpath-test-configmap-9km7 to disappear
May 25 11:45:26.038: INFO: Pod pod-subpath-test-configmap-9km7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9km7
May 25 11:45:26.038: INFO: Deleting pod "pod-subpath-test-configmap-9km7" in namespace "subpath-3006"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:45:26.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3006" for this suite.

• [SLOW TEST:26.327 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":164,"skipped":2781,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:45:26.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:45:26.148: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99f70578-0514-47f6-9bde-4aaaeb9fc282" in namespace "downward-api-9292" to be "Succeeded or Failed"
May 25 11:45:26.160: INFO: Pod "downwardapi-volume-99f70578-0514-47f6-9bde-4aaaeb9fc282": Phase="Pending", Reason="", readiness=false. Elapsed: 12.17823ms
May 25 11:45:28.165: INFO: Pod "downwardapi-volume-99f70578-0514-47f6-9bde-4aaaeb9fc282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016825538s
May 25 11:45:30.170: INFO: Pod "downwardapi-volume-99f70578-0514-47f6-9bde-4aaaeb9fc282": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021529334s
STEP: Saw pod success
May 25 11:45:30.170: INFO: Pod "downwardapi-volume-99f70578-0514-47f6-9bde-4aaaeb9fc282" satisfied condition "Succeeded or Failed"
May 25 11:45:30.173: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-99f70578-0514-47f6-9bde-4aaaeb9fc282 container client-container: 
STEP: delete the pod
May 25 11:45:30.211: INFO: Waiting for pod downwardapi-volume-99f70578-0514-47f6-9bde-4aaaeb9fc282 to disappear
May 25 11:45:30.214: INFO: Pod downwardapi-volume-99f70578-0514-47f6-9bde-4aaaeb9fc282 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:45:30.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9292" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2784,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:45:30.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
May 25 11:45:30.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4024'
May 25 11:45:30.631: INFO: stderr: ""
May 25 11:45:30.631: INFO: stdout: "pod/pause created\n"
May 25 11:45:30.631: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 25 11:45:30.631: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4024" to be "running and ready"
May 25 11:45:30.670: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 38.527465ms
May 25 11:45:32.766: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134925077s
May 25 11:45:34.886: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.254893958s
May 25 11:45:34.886: INFO: Pod "pause" satisfied condition "running and ready"
May 25 11:45:34.886: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
May 25 11:45:34.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4024'
May 25 11:45:34.989: INFO: stderr: ""
May 25 11:45:34.989: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 25 11:45:34.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4024'
May 25 11:45:35.237: INFO: stderr: ""
May 25 11:45:35.237: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
STEP: removing the label testing-label of a pod
May 25 11:45:35.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4024'
May 25 11:45:35.536: INFO: stderr: ""
May 25 11:45:35.536: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 25 11:45:35.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4024'
May 25 11:45:35.631: INFO: stderr: ""
May 25 11:45:35.631: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
May 25 11:45:35.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4024'
May 25 11:45:35.795: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:45:35.795: INFO: stdout: "pod \"pause\" force deleted\n"
May 25 11:45:35.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4024'
May 25 11:45:35.904: INFO: stderr: "No resources found in kubectl-4024 namespace.\n"
May 25 11:45:35.904: INFO: stdout: ""
May 25 11:45:35.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4024 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:45:36.151: INFO: stderr: ""
May 25 11:45:36.151: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:45:36.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4024" for this suite.

• [SLOW TEST:6.119 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":166,"skipped":2796,"failed":0}
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:45:36.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 25 11:45:36.852: INFO: PodSpec: initContainers in spec.initContainers
May 25 11:46:29.372: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-de7ba164-d6d0-48ec-a717-0435ede75561", GenerateName:"", Namespace:"init-container-9549", SelfLink:"/api/v1/namespaces/init-container-9549/pods/pod-init-de7ba164-d6d0-48ec-a717-0435ede75561", UID:"3f685ee1-8602-458b-8d44-05d14cfd21a7", ResourceVersion:"7181473", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726003936, loc:(*time.Location)(0x7b200c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"852281964"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b8d220), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b8d260)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b8d2a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b8d2e0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-qsnvg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004d01fc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qsnvg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qsnvg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qsnvg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00382ed78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000e577a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00382ee00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00382ee20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00382ee28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00382ee2c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003937, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003937, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003937, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726003936, loc:(*time.Location)(0x7b200c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.18", PodIP:"10.244.1.175", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.175"}}, StartTime:(*v1.Time)(0xc002b8d320), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000e57880)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000e578f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://6169fc7ab4b9c33ef78ce190a8a1149b3b9ab20c3745b615849aa6ee4949cf14", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b8d3a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b8d360), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00382eeaf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:46:29.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9549" for this suite.

• [SLOW TEST:53.079 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":167,"skipped":2800,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:46:29.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:46:29.545: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 25 11:46:29.596: INFO: Number of nodes with available pods: 0
May 25 11:46:29.596: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 25 11:46:29.691: INFO: Number of nodes with available pods: 0
May 25 11:46:29.691: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:30.696: INFO: Number of nodes with available pods: 0
May 25 11:46:30.696: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:31.803: INFO: Number of nodes with available pods: 0
May 25 11:46:31.803: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:32.696: INFO: Number of nodes with available pods: 0
May 25 11:46:32.696: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:33.697: INFO: Number of nodes with available pods: 1
May 25 11:46:33.697: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 25 11:46:33.740: INFO: Number of nodes with available pods: 1
May 25 11:46:33.740: INFO: Number of running nodes: 0, number of available pods: 1
May 25 11:46:34.744: INFO: Number of nodes with available pods: 0
May 25 11:46:34.744: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 25 11:46:34.809: INFO: Number of nodes with available pods: 0
May 25 11:46:34.809: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:35.813: INFO: Number of nodes with available pods: 0
May 25 11:46:35.813: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:36.814: INFO: Number of nodes with available pods: 0
May 25 11:46:36.814: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:37.812: INFO: Number of nodes with available pods: 0
May 25 11:46:37.812: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:38.814: INFO: Number of nodes with available pods: 0
May 25 11:46:38.814: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:39.812: INFO: Number of nodes with available pods: 0
May 25 11:46:39.812: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:40.813: INFO: Number of nodes with available pods: 0
May 25 11:46:40.813: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:41.813: INFO: Number of nodes with available pods: 0
May 25 11:46:41.813: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:42.845: INFO: Number of nodes with available pods: 0
May 25 11:46:42.845: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:43.813: INFO: Number of nodes with available pods: 0
May 25 11:46:43.813: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:44.813: INFO: Number of nodes with available pods: 0
May 25 11:46:44.813: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:45.833: INFO: Number of nodes with available pods: 0
May 25 11:46:45.833: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:46.822: INFO: Number of nodes with available pods: 0
May 25 11:46:46.822: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:46:47.813: INFO: Number of nodes with available pods: 1
May 25 11:46:47.813: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5704, will wait for the garbage collector to delete the pods
May 25 11:46:47.876: INFO: Deleting DaemonSet.extensions daemon-set took: 6.703503ms
May 25 11:46:48.177: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.551627ms
May 25 11:47:03.480: INFO: Number of nodes with available pods: 0
May 25 11:47:03.480: INFO: Number of running nodes: 0, number of available pods: 0
May 25 11:47:03.484: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5704/daemonsets","resourceVersion":"7181635"},"items":null}

May 25 11:47:03.486: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5704/pods","resourceVersion":"7181635"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:47:03.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5704" for this suite.

• [SLOW TEST:34.110 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":168,"skipped":2810,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:47:03.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:47:03.657: INFO: Pod name cleanup-pod: Found 0 pods out of 1
May 25 11:47:08.748: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 25 11:47:08.748: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 25 11:47:14.901: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-504 /apis/apps/v1/namespaces/deployment-504/deployments/test-cleanup-deployment 6fa3e649-ef3b-48e3-a98e-43018884e3b1 7181728 1 2020-05-25 11:47:08 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  [{e2e.test Update apps/v1 2020-05-25 11:47:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-25 11:47:12 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 
84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003219b08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-25 11:47:08 +0000 UTC,LastTransitionTime:2020-05-25 11:47:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-b4867b47f" has successfully progressed.,LastUpdateTime:2020-05-25 11:47:12 +0000 UTC,LastTransitionTime:2020-05-25 11:47:08 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

May 25 11:47:14.904: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f  deployment-504 /apis/apps/v1/namespaces/deployment-504/replicasets/test-cleanup-deployment-b4867b47f 85550d55-ee2c-444d-980e-ed21cc083498 7181716 1 2020-05-25 11:47:08 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 6fa3e649-ef3b-48e3-a98e-43018884e3b1 0xc003b5a680 0xc003b5a681}] []  [{kube-controller-manager Update apps/v1 2020-05-25 11:47:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 102 97 51 101 54 52 57 45 101 102 51 98 45 52 56 101 51 45 97 57 56 101 45 52 51 48 49 56 56 56 52 101 51 98 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 
34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b5a6f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May 25 11:47:14.907: INFO: Pod "test-cleanup-deployment-b4867b47f-dnqhx" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-dnqhx test-cleanup-deployment-b4867b47f- deployment-504 /api/v1/namespaces/deployment-504/pods/test-cleanup-deployment-b4867b47f-dnqhx 01ccf715-ed81-4c3f-a5a9-5f2b9d950554 7181715 0 2020-05-25 11:47:08 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 85550d55-ee2c-444d-980e-ed21cc083498 0xc003b5aaf0 0xc003b5aaf1}] []  [{kube-controller-manager Update v1 2020-05-25 11:47:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 53 53 53 48 100 53 53 45 101 101 50 99 45 52 52 52 100 45 57 56 48 101 45 101 100 50 49 99 99 48 56 51 52 57 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:47:12 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 
58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 55 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n8m4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n8m4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n8m4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,E
ffect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:47:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:47:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.179,StartTime:2020-05-25 11:47:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 11:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://a37083e03157f595689286459fb71af5506845a2704ab85a8e53361d1fcdfad4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.179,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:47:14.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-504" for this suite.

• [SLOW TEST:11.384 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":169,"skipped":2818,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:47:14.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:47:15.048: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:47:16.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4085" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":170,"skipped":2838,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:47:16.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:47:16.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 25 11:47:19.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2855 create -f -'
May 25 11:47:23.391: INFO: stderr: ""
May 25 11:47:23.391: INFO: stdout: "e2e-test-crd-publish-openapi-6038-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 25 11:47:23.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2855 delete e2e-test-crd-publish-openapi-6038-crds test-cr'
May 25 11:47:23.486: INFO: stderr: ""
May 25 11:47:23.486: INFO: stdout: "e2e-test-crd-publish-openapi-6038-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May 25 11:47:23.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2855 apply -f -'
May 25 11:47:23.866: INFO: stderr: ""
May 25 11:47:23.866: INFO: stdout: "e2e-test-crd-publish-openapi-6038-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 25 11:47:23.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2855 delete e2e-test-crd-publish-openapi-6038-crds test-cr'
May 25 11:47:24.002: INFO: stderr: ""
May 25 11:47:24.002: INFO: stdout: "e2e-test-crd-publish-openapi-6038-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May 25 11:47:24.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6038-crds'
May 25 11:47:24.254: INFO: stderr: ""
May 25 11:47:24.254: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6038-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:47:27.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2855" for this suite.

• [SLOW TEST:10.971 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":171,"skipped":2851,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:47:27.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
May 25 11:47:31.401: INFO: Pod pod-hostip-4291c3d2-9d33-4323-a40e-33cade12e912 has hostIP: 172.17.0.18
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:47:31.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9077" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2859,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:47:31.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-3856d646-ab1f-4498-b3f4-ba6bac244487
STEP: Creating a pod to test consume secrets
May 25 11:47:32.121: INFO: Waiting up to 5m0s for pod "pod-secrets-aae7f47c-60a8-427e-9d3a-bd904be626cf" in namespace "secrets-6600" to be "Succeeded or Failed"
May 25 11:47:32.190: INFO: Pod "pod-secrets-aae7f47c-60a8-427e-9d3a-bd904be626cf": Phase="Pending", Reason="", readiness=false. Elapsed: 68.665556ms
May 25 11:47:34.194: INFO: Pod "pod-secrets-aae7f47c-60a8-427e-9d3a-bd904be626cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072462911s
May 25 11:47:36.198: INFO: Pod "pod-secrets-aae7f47c-60a8-427e-9d3a-bd904be626cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076599146s
May 25 11:47:38.312: INFO: Pod "pod-secrets-aae7f47c-60a8-427e-9d3a-bd904be626cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190904766s
STEP: Saw pod success
May 25 11:47:38.312: INFO: Pod "pod-secrets-aae7f47c-60a8-427e-9d3a-bd904be626cf" satisfied condition "Succeeded or Failed"
May 25 11:47:38.316: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-aae7f47c-60a8-427e-9d3a-bd904be626cf container secret-volume-test: 
STEP: delete the pod
May 25 11:47:38.794: INFO: Waiting for pod pod-secrets-aae7f47c-60a8-427e-9d3a-bd904be626cf to disappear
May 25 11:47:38.820: INFO: Pod pod-secrets-aae7f47c-60a8-427e-9d3a-bd904be626cf no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:47:38.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6600" for this suite.

• [SLOW TEST:7.556 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2866,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:47:38.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May 25 11:47:39.087: INFO: Waiting up to 5m0s for pod "pod-7b36ee2d-c007-46fe-89f0-5ab1c3db660f" in namespace "emptydir-1886" to be "Succeeded or Failed"
May 25 11:47:39.096: INFO: Pod "pod-7b36ee2d-c007-46fe-89f0-5ab1c3db660f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.03523ms
May 25 11:47:41.331: INFO: Pod "pod-7b36ee2d-c007-46fe-89f0-5ab1c3db660f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243993176s
May 25 11:47:43.335: INFO: Pod "pod-7b36ee2d-c007-46fe-89f0-5ab1c3db660f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.248153168s
STEP: Saw pod success
May 25 11:47:43.335: INFO: Pod "pod-7b36ee2d-c007-46fe-89f0-5ab1c3db660f" satisfied condition "Succeeded or Failed"
May 25 11:47:43.337: INFO: Trying to get logs from node kali-worker2 pod pod-7b36ee2d-c007-46fe-89f0-5ab1c3db660f container test-container: 
STEP: delete the pod
May 25 11:47:43.358: INFO: Waiting for pod pod-7b36ee2d-c007-46fe-89f0-5ab1c3db660f to disappear
May 25 11:47:43.381: INFO: Pod pod-7b36ee2d-c007-46fe-89f0-5ab1c3db660f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:47:43.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1886" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2866,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:47:43.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:47:43.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:47:47.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1991" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":2869,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:47:47.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-d067fc1f-4fc9-49a7-9e1d-657c989815e6
STEP: Creating a pod to test consume secrets
May 25 11:47:47.714: INFO: Waiting up to 5m0s for pod "pod-secrets-e29018f0-0941-45b8-bec8-07e15d25ceb2" in namespace "secrets-8720" to be "Succeeded or Failed"
May 25 11:47:47.722: INFO: Pod "pod-secrets-e29018f0-0941-45b8-bec8-07e15d25ceb2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.522094ms
May 25 11:47:49.726: INFO: Pod "pod-secrets-e29018f0-0941-45b8-bec8-07e15d25ceb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012523865s
May 25 11:47:51.773: INFO: Pod "pod-secrets-e29018f0-0941-45b8-bec8-07e15d25ceb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059348188s
STEP: Saw pod success
May 25 11:47:51.773: INFO: Pod "pod-secrets-e29018f0-0941-45b8-bec8-07e15d25ceb2" satisfied condition "Succeeded or Failed"
May 25 11:47:51.776: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-e29018f0-0941-45b8-bec8-07e15d25ceb2 container secret-env-test: 
STEP: delete the pod
May 25 11:47:51.866: INFO: Waiting for pod pod-secrets-e29018f0-0941-45b8-bec8-07e15d25ceb2 to disappear
May 25 11:47:51.893: INFO: Pod pod-secrets-e29018f0-0941-45b8-bec8-07e15d25ceb2 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:47:51.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8720" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2889,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:47:51.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3390.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3390.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3390.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 25 11:48:00.093: INFO: DNS probes using dns-test-e92ea36c-a849-444d-97c3-8c9b6e113998 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3390.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3390.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3390.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 25 11:48:08.273: INFO: File wheezy_udp@dns-test-service-3.dns-3390.svc.cluster.local from pod  dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 25 11:48:08.276: INFO: File jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local from pod  dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 25 11:48:08.277: INFO: Lookups using dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 failed for: [wheezy_udp@dns-test-service-3.dns-3390.svc.cluster.local jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local]

May 25 11:48:13.282: INFO: File wheezy_udp@dns-test-service-3.dns-3390.svc.cluster.local from pod  dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 25 11:48:13.286: INFO: File jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local from pod  dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 25 11:48:13.286: INFO: Lookups using dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 failed for: [wheezy_udp@dns-test-service-3.dns-3390.svc.cluster.local jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local]

May 25 11:48:18.282: INFO: File wheezy_udp@dns-test-service-3.dns-3390.svc.cluster.local from pod  dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 25 11:48:18.287: INFO: File jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local from pod  dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 25 11:48:18.287: INFO: Lookups using dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 failed for: [wheezy_udp@dns-test-service-3.dns-3390.svc.cluster.local jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local]

May 25 11:48:23.281: INFO: File wheezy_udp@dns-test-service-3.dns-3390.svc.cluster.local from pod  dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 25 11:48:23.285: INFO: File jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local from pod  dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 25 11:48:23.285: INFO: Lookups using dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 failed for: [wheezy_udp@dns-test-service-3.dns-3390.svc.cluster.local jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local]

May 25 11:48:28.285: INFO: File jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local from pod  dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 25 11:48:28.285: INFO: Lookups using dns-3390/dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 failed for: [jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local]

May 25 11:48:33.287: INFO: DNS probes using dns-test-392c041c-44bf-4193-b4ad-a8f73ef4e0c1 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3390.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3390.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3390.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3390.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 25 11:48:40.128: INFO: DNS probes using dns-test-e524ca01-ebb7-4731-97db-dbad2be18d5c succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:48:40.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3390" for this suite.

• [SLOW TEST:48.367 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":177,"skipped":2894,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:48:40.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 11:48:41.750: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 11:48:43.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004121, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004121, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004121, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004121, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:48:45.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004121, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004121, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004121, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004121, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 11:48:48.817: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:48:48.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5290" for this suite.
STEP: Destroying namespace "webhook-5290-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.844 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":178,"skipped":2919,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:48:49.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:48:49.194: INFO: Creating ReplicaSet my-hostname-basic-075363c8-eefd-4a31-8e4e-74631f4012cf
May 25 11:48:49.222: INFO: Pod name my-hostname-basic-075363c8-eefd-4a31-8e4e-74631f4012cf: Found 0 pods out of 1
May 25 11:48:54.226: INFO: Pod name my-hostname-basic-075363c8-eefd-4a31-8e4e-74631f4012cf: Found 1 pods out of 1
May 25 11:48:54.226: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-075363c8-eefd-4a31-8e4e-74631f4012cf" is running
May 25 11:48:54.229: INFO: Pod "my-hostname-basic-075363c8-eefd-4a31-8e4e-74631f4012cf-nkvwn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 11:48:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 11:48:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 11:48:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 11:48:49 +0000 UTC Reason: Message:}])
May 25 11:48:54.230: INFO: Trying to dial the pod
May 25 11:48:59.239: INFO: Controller my-hostname-basic-075363c8-eefd-4a31-8e4e-74631f4012cf: Got expected result from replica 1 [my-hostname-basic-075363c8-eefd-4a31-8e4e-74631f4012cf-nkvwn]: "my-hostname-basic-075363c8-eefd-4a31-8e4e-74631f4012cf-nkvwn", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:48:59.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8905" for this suite.

• [SLOW TEST:10.135 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":179,"skipped":2924,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:48:59.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-762bb4f4-a7e9-47c6-bd5f-73b747b4612b
STEP: Creating a pod to test consume configMaps
May 25 11:48:59.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-97c162fe-d687-4d0a-a949-0face9231c1c" in namespace "configmap-4811" to be "Succeeded or Failed"
May 25 11:48:59.394: INFO: Pod "pod-configmaps-97c162fe-d687-4d0a-a949-0face9231c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.457489ms
May 25 11:49:01.403: INFO: Pod "pod-configmaps-97c162fe-d687-4d0a-a949-0face9231c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029493605s
May 25 11:49:03.407: INFO: Pod "pod-configmaps-97c162fe-d687-4d0a-a949-0face9231c1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032970597s
STEP: Saw pod success
May 25 11:49:03.407: INFO: Pod "pod-configmaps-97c162fe-d687-4d0a-a949-0face9231c1c" satisfied condition "Succeeded or Failed"
May 25 11:49:03.409: INFO: Trying to get logs from node kali-worker pod pod-configmaps-97c162fe-d687-4d0a-a949-0face9231c1c container configmap-volume-test: 
STEP: delete the pod
May 25 11:49:03.535: INFO: Waiting for pod pod-configmaps-97c162fe-d687-4d0a-a949-0face9231c1c to disappear
May 25 11:49:03.571: INFO: Pod pod-configmaps-97c162fe-d687-4d0a-a949-0face9231c1c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:49:03.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4811" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":2938,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:49:03.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-rgbc
STEP: Creating a pod to test atomic-volume-subpath
May 25 11:49:03.770: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rgbc" in namespace "subpath-2538" to be "Succeeded or Failed"
May 25 11:49:03.776: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176998ms
May 25 11:49:05.895: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124388015s
May 25 11:49:07.930: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Running", Reason="", readiness=true. Elapsed: 4.159407858s
May 25 11:49:09.934: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Running", Reason="", readiness=true. Elapsed: 6.16390212s
May 25 11:49:11.939: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Running", Reason="", readiness=true. Elapsed: 8.168842646s
May 25 11:49:13.943: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Running", Reason="", readiness=true. Elapsed: 10.173140116s
May 25 11:49:15.948: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Running", Reason="", readiness=true. Elapsed: 12.177574701s
May 25 11:49:17.952: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Running", Reason="", readiness=true. Elapsed: 14.182046384s
May 25 11:49:19.956: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Running", Reason="", readiness=true. Elapsed: 16.185886182s
May 25 11:49:21.961: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Running", Reason="", readiness=true. Elapsed: 18.190713264s
May 25 11:49:23.965: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Running", Reason="", readiness=true. Elapsed: 20.195094547s
May 25 11:49:25.970: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Running", Reason="", readiness=true. Elapsed: 22.199931258s
May 25 11:49:27.975: INFO: Pod "pod-subpath-test-projected-rgbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.204795978s
STEP: Saw pod success
May 25 11:49:27.975: INFO: Pod "pod-subpath-test-projected-rgbc" satisfied condition "Succeeded or Failed"
May 25 11:49:27.979: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-rgbc container test-container-subpath-projected-rgbc: 
STEP: delete the pod
May 25 11:49:28.004: INFO: Waiting for pod pod-subpath-test-projected-rgbc to disappear
May 25 11:49:28.026: INFO: Pod pod-subpath-test-projected-rgbc no longer exists
STEP: Deleting pod pod-subpath-test-projected-rgbc
May 25 11:49:28.026: INFO: Deleting pod "pod-subpath-test-projected-rgbc" in namespace "subpath-2538"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:49:28.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2538" for this suite.

• [SLOW TEST:24.455 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":181,"skipped":2962,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:49:28.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
May 25 11:49:28.076: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:49:30.042: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:49:40.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9105" for this suite.

• [SLOW TEST:12.633 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":182,"skipped":2985,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:49:40.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 25 11:49:45.276: INFO: Successfully updated pod "pod-update-a8310080-b9f7-4a70-b2c5-1b254ac4a151"
STEP: verifying the updated pod is in kubernetes
May 25 11:49:45.307: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:49:45.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-225" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3009,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:49:45.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-3ed8c6a1-91e8-44fa-927f-d70d47dac1dc
STEP: Creating a pod to test consume configMaps
May 25 11:49:45.406: INFO: Waiting up to 5m0s for pod "pod-configmaps-551904da-feb2-4d7b-9ece-b0cdd2db9716" in namespace "configmap-5836" to be "Succeeded or Failed"
May 25 11:49:45.446: INFO: Pod "pod-configmaps-551904da-feb2-4d7b-9ece-b0cdd2db9716": Phase="Pending", Reason="", readiness=false. Elapsed: 39.94014ms
May 25 11:49:47.451: INFO: Pod "pod-configmaps-551904da-feb2-4d7b-9ece-b0cdd2db9716": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044085305s
May 25 11:49:49.455: INFO: Pod "pod-configmaps-551904da-feb2-4d7b-9ece-b0cdd2db9716": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04861433s
STEP: Saw pod success
May 25 11:49:49.455: INFO: Pod "pod-configmaps-551904da-feb2-4d7b-9ece-b0cdd2db9716" satisfied condition "Succeeded or Failed"
May 25 11:49:49.458: INFO: Trying to get logs from node kali-worker pod pod-configmaps-551904da-feb2-4d7b-9ece-b0cdd2db9716 container configmap-volume-test: 
STEP: delete the pod
May 25 11:49:49.510: INFO: Waiting for pod pod-configmaps-551904da-feb2-4d7b-9ece-b0cdd2db9716 to disappear
May 25 11:49:49.517: INFO: Pod pod-configmaps-551904da-feb2-4d7b-9ece-b0cdd2db9716 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:49:49.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5836" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3037,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:49:49.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 25 11:49:49.600: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 25 11:49:49.614: INFO: Waiting for terminating namespaces to be deleted...
May 25 11:49:49.617: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May 25 11:49:49.623: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:49:49.623: INFO: 	Container kindnet-cni ready: true, restart count 1
May 25 11:49:49.623: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:49:49.623: INFO: 	Container kube-proxy ready: true, restart count 0
May 25 11:49:49.623: INFO: pod-update-a8310080-b9f7-4a70-b2c5-1b254ac4a151 from pods-225 started at 2020-05-25 11:49:40 +0000 UTC (1 container statuses recorded)
May 25 11:49:49.623: INFO: 	Container nginx ready: true, restart count 0
May 25 11:49:49.623: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May 25 11:49:49.644: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:49:49.644: INFO: 	Container kindnet-cni ready: true, restart count 0
May 25 11:49:49.644: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 11:49:49.644: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f4dfa62e-5eeb-49fa-b4d8-53518058a5b0 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f4dfa62e-5eeb-49fa-b4d8-53518058a5b0 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f4dfa62e-5eeb-49fa-b4d8-53518058a5b0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:50:00.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4831" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:11.386 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":185,"skipped":3060,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:50:00.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:50:01.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4606" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":186,"skipped":3062,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:50:01.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:50:01.688: INFO: The status of Pod test-webserver-b5f00181-1e68-4756-a400-1bb777e640dc is Pending, waiting for it to be Running (with Ready = true)
May 25 11:50:03.692: INFO: The status of Pod test-webserver-b5f00181-1e68-4756-a400-1bb777e640dc is Pending, waiting for it to be Running (with Ready = true)
May 25 11:50:05.715: INFO: The status of Pod test-webserver-b5f00181-1e68-4756-a400-1bb777e640dc is Pending, waiting for it to be Running (with Ready = true)
May 25 11:50:07.691: INFO: The status of Pod test-webserver-b5f00181-1e68-4756-a400-1bb777e640dc is Running (Ready = false)
May 25 11:50:09.691: INFO: The status of Pod test-webserver-b5f00181-1e68-4756-a400-1bb777e640dc is Running (Ready = false)
May 25 11:50:11.692: INFO: The status of Pod test-webserver-b5f00181-1e68-4756-a400-1bb777e640dc is Running (Ready = false)
May 25 11:50:13.692: INFO: The status of Pod test-webserver-b5f00181-1e68-4756-a400-1bb777e640dc is Running (Ready = false)
May 25 11:50:15.692: INFO: The status of Pod test-webserver-b5f00181-1e68-4756-a400-1bb777e640dc is Running (Ready = false)
May 25 11:50:17.692: INFO: The status of Pod test-webserver-b5f00181-1e68-4756-a400-1bb777e640dc is Running (Ready = false)
May 25 11:50:19.692: INFO: The status of Pod test-webserver-b5f00181-1e68-4756-a400-1bb777e640dc is Running (Ready = false)
May 25 11:50:21.692: INFO: The status of Pod test-webserver-b5f00181-1e68-4756-a400-1bb777e640dc is Running (Ready = true)
May 25 11:50:21.695: INFO: Container started at 2020-05-25 11:50:04 +0000 UTC, pod became ready at 2020-05-25 11:50:20 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:50:21.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4808" for this suite.

• [SLOW TEST:20.345 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3077,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:50:21.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-7c1c49fc-625e-4c08-86d8-d8e89b279928
STEP: Creating a pod to test consume configMaps
May 25 11:50:21.899: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b37db220-c976-45d4-bb0f-2eaf911f94e9" in namespace "projected-6950" to be "Succeeded or Failed"
May 25 11:50:21.928: INFO: Pod "pod-projected-configmaps-b37db220-c976-45d4-bb0f-2eaf911f94e9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.686422ms
May 25 11:50:24.020: INFO: Pod "pod-projected-configmaps-b37db220-c976-45d4-bb0f-2eaf911f94e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120967117s
May 25 11:50:26.025: INFO: Pod "pod-projected-configmaps-b37db220-c976-45d4-bb0f-2eaf911f94e9": Phase="Running", Reason="", readiness=true. Elapsed: 4.126002975s
May 25 11:50:28.104: INFO: Pod "pod-projected-configmaps-b37db220-c976-45d4-bb0f-2eaf911f94e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.204540144s
STEP: Saw pod success
May 25 11:50:28.104: INFO: Pod "pod-projected-configmaps-b37db220-c976-45d4-bb0f-2eaf911f94e9" satisfied condition "Succeeded or Failed"
May 25 11:50:28.107: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-b37db220-c976-45d4-bb0f-2eaf911f94e9 container projected-configmap-volume-test: 
STEP: delete the pod
May 25 11:50:28.175: INFO: Waiting for pod pod-projected-configmaps-b37db220-c976-45d4-bb0f-2eaf911f94e9 to disappear
May 25 11:50:28.202: INFO: Pod pod-projected-configmaps-b37db220-c976-45d4-bb0f-2eaf911f94e9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:50:28.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6950" for this suite.

• [SLOW TEST:6.530 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3095,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:50:28.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-b7ac0ec9-26ec-432f-82d3-aca4d68a4a40
STEP: Creating a pod to test consume secrets
May 25 11:50:28.520: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e2ea3842-1ac0-4635-a69c-1ae7166fe876" in namespace "projected-4502" to be "Succeeded or Failed"
May 25 11:50:28.532: INFO: Pod "pod-projected-secrets-e2ea3842-1ac0-4635-a69c-1ae7166fe876": Phase="Pending", Reason="", readiness=false. Elapsed: 12.275492ms
May 25 11:50:30.544: INFO: Pod "pod-projected-secrets-e2ea3842-1ac0-4635-a69c-1ae7166fe876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023921495s
May 25 11:50:32.548: INFO: Pod "pod-projected-secrets-e2ea3842-1ac0-4635-a69c-1ae7166fe876": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028224637s
STEP: Saw pod success
May 25 11:50:32.548: INFO: Pod "pod-projected-secrets-e2ea3842-1ac0-4635-a69c-1ae7166fe876" satisfied condition "Succeeded or Failed"
May 25 11:50:32.550: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-e2ea3842-1ac0-4635-a69c-1ae7166fe876 container secret-volume-test: 
STEP: delete the pod
May 25 11:50:32.587: INFO: Waiting for pod pod-projected-secrets-e2ea3842-1ac0-4635-a69c-1ae7166fe876 to disappear
May 25 11:50:32.591: INFO: Pod pod-projected-secrets-e2ea3842-1ac0-4635-a69c-1ae7166fe876 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:50:32.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4502" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3113,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:50:32.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-ktqt
STEP: Creating a pod to test atomic-volume-subpath
May 25 11:50:32.774: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ktqt" in namespace "subpath-4795" to be "Succeeded or Failed"
May 25 11:50:32.778: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Pending", Reason="", readiness=false. Elapsed: 3.417387ms
May 25 11:50:34.822: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047618928s
May 25 11:50:36.826: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Running", Reason="", readiness=true. Elapsed: 4.051696245s
May 25 11:50:38.830: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Running", Reason="", readiness=true. Elapsed: 6.055989751s
May 25 11:50:40.835: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Running", Reason="", readiness=true. Elapsed: 8.060295977s
May 25 11:50:42.845: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Running", Reason="", readiness=true. Elapsed: 10.071005557s
May 25 11:50:44.857: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Running", Reason="", readiness=true. Elapsed: 12.082356256s
May 25 11:50:46.862: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Running", Reason="", readiness=true. Elapsed: 14.087339639s
May 25 11:50:48.866: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Running", Reason="", readiness=true. Elapsed: 16.091363959s
May 25 11:50:50.870: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Running", Reason="", readiness=true. Elapsed: 18.095647163s
May 25 11:50:52.875: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Running", Reason="", readiness=true. Elapsed: 20.100337382s
May 25 11:50:54.879: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Running", Reason="", readiness=true. Elapsed: 22.104343847s
May 25 11:50:56.883: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Running", Reason="", readiness=true. Elapsed: 24.10831453s
May 25 11:50:58.888: INFO: Pod "pod-subpath-test-downwardapi-ktqt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.113147747s
STEP: Saw pod success
May 25 11:50:58.888: INFO: Pod "pod-subpath-test-downwardapi-ktqt" satisfied condition "Succeeded or Failed"
May 25 11:50:58.891: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-downwardapi-ktqt container test-container-subpath-downwardapi-ktqt: 
STEP: delete the pod
May 25 11:50:58.949: INFO: Waiting for pod pod-subpath-test-downwardapi-ktqt to disappear
May 25 11:50:58.963: INFO: Pod pod-subpath-test-downwardapi-ktqt no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-ktqt
May 25 11:50:58.963: INFO: Deleting pod "pod-subpath-test-downwardapi-ktqt" in namespace "subpath-4795"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:50:58.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4795" for this suite.

• [SLOW TEST:26.348 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":190,"skipped":3120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:50:58.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-80a038ab-7a96-4111-8010-0b973893b1a1
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-80a038ab-7a96-4111-8010-0b973893b1a1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:51:07.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8410" for this suite.

• [SLOW TEST:8.211 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3147,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:51:07.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:51:07.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8be8a659-9154-4e7b-8bb8-b6488529e771" in namespace "projected-3290" to be "Succeeded or Failed"
May 25 11:51:07.501: INFO: Pod "downwardapi-volume-8be8a659-9154-4e7b-8bb8-b6488529e771": Phase="Pending", Reason="", readiness=false. Elapsed: 45.3047ms
May 25 11:51:09.506: INFO: Pod "downwardapi-volume-8be8a659-9154-4e7b-8bb8-b6488529e771": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050444624s
May 25 11:51:11.510: INFO: Pod "downwardapi-volume-8be8a659-9154-4e7b-8bb8-b6488529e771": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054380954s
STEP: Saw pod success
May 25 11:51:11.510: INFO: Pod "downwardapi-volume-8be8a659-9154-4e7b-8bb8-b6488529e771" satisfied condition "Succeeded or Failed"
May 25 11:51:11.512: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-8be8a659-9154-4e7b-8bb8-b6488529e771 container client-container: 
STEP: delete the pod
May 25 11:51:11.735: INFO: Waiting for pod downwardapi-volume-8be8a659-9154-4e7b-8bb8-b6488529e771 to disappear
May 25 11:51:11.743: INFO: Pod downwardapi-volume-8be8a659-9154-4e7b-8bb8-b6488529e771 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:51:11.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3290" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3185,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:51:11.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 25 11:51:11.866: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:51:11.868: INFO: Number of nodes with available pods: 0
May 25 11:51:11.868: INFO: Node kali-worker is running more than one daemon pod
May 25 11:51:12.874: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:51:12.878: INFO: Number of nodes with available pods: 0
May 25 11:51:12.878: INFO: Node kali-worker is running more than one daemon pod
May 25 11:51:13.938: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:51:13.943: INFO: Number of nodes with available pods: 0
May 25 11:51:13.943: INFO: Node kali-worker is running more than one daemon pod
May 25 11:51:14.903: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:51:14.906: INFO: Number of nodes with available pods: 0
May 25 11:51:14.907: INFO: Node kali-worker is running more than one daemon pod
May 25 11:51:15.873: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:51:15.876: INFO: Number of nodes with available pods: 0
May 25 11:51:15.876: INFO: Node kali-worker is running more than one daemon pod
May 25 11:51:16.884: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:51:16.888: INFO: Number of nodes with available pods: 2
May 25 11:51:16.888: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 25 11:51:16.936: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:51:17.046: INFO: Number of nodes with available pods: 1
May 25 11:51:17.046: INFO: Node kali-worker is running more than one daemon pod
May 25 11:51:18.051: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:51:18.053: INFO: Number of nodes with available pods: 1
May 25 11:51:18.053: INFO: Node kali-worker is running more than one daemon pod
May 25 11:51:19.052: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:51:19.056: INFO: Number of nodes with available pods: 1
May 25 11:51:19.056: INFO: Node kali-worker is running more than one daemon pod
May 25 11:51:20.052: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:51:20.056: INFO: Number of nodes with available pods: 1
May 25 11:51:20.056: INFO: Node kali-worker is running more than one daemon pod
May 25 11:51:21.052: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:51:21.056: INFO: Number of nodes with available pods: 2
May 25 11:51:21.056: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4803, will wait for the garbage collector to delete the pods
May 25 11:51:21.122: INFO: Deleting DaemonSet.extensions daemon-set took: 6.726799ms
May 25 11:51:21.422: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.323507ms
May 25 11:51:33.744: INFO: Number of nodes with available pods: 0
May 25 11:51:33.744: INFO: Number of running nodes: 0, number of available pods: 0
May 25 11:51:33.746: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4803/daemonsets","resourceVersion":"7183290"},"items":null}

May 25 11:51:33.749: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4803/pods","resourceVersion":"7183290"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:51:33.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4803" for this suite.

• [SLOW TEST:22.015 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":193,"skipped":3195,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:51:33.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3183
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3183
STEP: Creating statefulset with conflicting port in namespace statefulset-3183
STEP: Waiting until pod test-pod starts running in namespace statefulset-3183
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-3183
May 25 11:51:37.969: INFO: Observed stateful pod in namespace: statefulset-3183, name: ss-0, uid: 2dfc93e1-cf4a-4fd7-93ba-7c4e23acc34f, status phase: Pending. Waiting for statefulset controller to delete.
May 25 11:51:38.297: INFO: Observed stateful pod in namespace: statefulset-3183, name: ss-0, uid: 2dfc93e1-cf4a-4fd7-93ba-7c4e23acc34f, status phase: Failed. Waiting for statefulset controller to delete.
May 25 11:51:38.394: INFO: Observed stateful pod in namespace: statefulset-3183, name: ss-0, uid: 2dfc93e1-cf4a-4fd7-93ba-7c4e23acc34f, status phase: Failed. Waiting for statefulset controller to delete.
May 25 11:51:38.437: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3183
STEP: Removing pod with conflicting port in namespace statefulset-3183
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3183 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 25 11:51:44.568: INFO: Deleting all statefulset in ns statefulset-3183
May 25 11:51:44.571: INFO: Scaling statefulset ss to 0
May 25 11:51:54.594: INFO: Waiting for statefulset status.replicas updated to 0
May 25 11:51:54.597: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:51:54.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3183" for this suite.

• [SLOW TEST:20.850 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":194,"skipped":3195,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
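The StatefulSet spec above deliberately creates a pod holding a conflicting host port, watches ss-0 cycle through Pending and Failed until the controller deletes it, removes the conflicting pod, and waits for ss-0 to come back Running. Below is a sketch of observing that delete-and-recreate cycle with a field-selector watch; the namespace comes from the log, and this is an approximation of the check, not the framework's code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "statefulset-3183" // namespace from the log
	w, err := cs.CoreV1().Pods(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=ss-0",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Track the UID so a recreated ss-0 (new UID) can be distinguished from
	// status updates to the original, doomed pod.
	var firstUID string
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		if firstUID == "" {
			firstUID = string(pod.UID)
		}
		fmt.Printf("%s ss-0 uid=%s phase=%s\n", ev.Type, pod.UID, pod.Status.Phase)
		if string(pod.UID) != firstUID && pod.Status.Phase == corev1.PodRunning {
			fmt.Println("ss-0 was recreated and is running")
			break
		}
	}
}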
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:51:54.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
May 25 11:51:54.718: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix418615839/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:51:54.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3914" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":195,"skipped":3221,"failed":0}
SSSSSSSSSSSSSSSSSSS
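The proxy spec above launches kubectl proxy bound to a unix socket and then fetches /api/ through it. Here is a small sketch of a Go HTTP client that dials such a socket directly; the socket path is an illustrative assumption (the test generates a temporary one).

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Illustrative path; the test points --unix-socket at a temporary file.
	socket := "/tmp/kubectl-proxy-unix/test"

	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host in the URL and always dial the proxy's unix socket.
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}

	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // expected to be the APIVersions list the spec retrieves
}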
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:51:54.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5203
STEP: Creating an active service to test reachability when its FQDN is referenced as the externalName of another service
STEP: creating service externalsvc in namespace services-5203
STEP: creating replication controller externalsvc in namespace services-5203
I0525 11:51:55.093645       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5203, replica count: 2
I0525 11:51:58.144122       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:52:01.144376       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
May 25 11:52:01.243: INFO: Creating new exec pod
May 25 11:52:05.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5203 execpodq5m8p -- /bin/sh -x -c nslookup clusterip-service'
May 25 11:52:05.491: INFO: stderr: "I0525 11:52:05.392939    2566 log.go:172] (0xc00043c0b0) (0xc000677360) Create stream\nI0525 11:52:05.392992    2566 log.go:172] (0xc00043c0b0) (0xc000677360) Stream added, broadcasting: 1\nI0525 11:52:05.401934    2566 log.go:172] (0xc00043c0b0) Reply frame received for 1\nI0525 11:52:05.401981    2566 log.go:172] (0xc00043c0b0) (0xc0008ba000) Create stream\nI0525 11:52:05.401993    2566 log.go:172] (0xc00043c0b0) (0xc0008ba000) Stream added, broadcasting: 3\nI0525 11:52:05.407558    2566 log.go:172] (0xc00043c0b0) Reply frame received for 3\nI0525 11:52:05.407590    2566 log.go:172] (0xc00043c0b0) (0xc0008ba0a0) Create stream\nI0525 11:52:05.407599    2566 log.go:172] (0xc00043c0b0) (0xc0008ba0a0) Stream added, broadcasting: 5\nI0525 11:52:05.409370    2566 log.go:172] (0xc00043c0b0) Reply frame received for 5\nI0525 11:52:05.451185    2566 log.go:172] (0xc00043c0b0) Data frame received for 5\nI0525 11:52:05.451210    2566 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0525 11:52:05.451224    2566 log.go:172] (0xc0008ba0a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0525 11:52:05.483059    2566 log.go:172] (0xc00043c0b0) Data frame received for 3\nI0525 11:52:05.483085    2566 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0525 11:52:05.483101    2566 log.go:172] (0xc0008ba000) (3) Data frame sent\nI0525 11:52:05.483699    2566 log.go:172] (0xc00043c0b0) Data frame received for 3\nI0525 11:52:05.483713    2566 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0525 11:52:05.483722    2566 log.go:172] (0xc0008ba000) (3) Data frame sent\nI0525 11:52:05.484328    2566 log.go:172] (0xc00043c0b0) Data frame received for 3\nI0525 11:52:05.484363    2566 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0525 11:52:05.484403    2566 log.go:172] (0xc00043c0b0) Data frame received for 5\nI0525 11:52:05.484424    2566 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0525 11:52:05.486252    2566 log.go:172] (0xc00043c0b0) Data frame received for 1\nI0525 11:52:05.486282    2566 log.go:172] (0xc000677360) (1) Data frame handling\nI0525 11:52:05.486292    2566 log.go:172] (0xc000677360) (1) Data frame sent\nI0525 11:52:05.486305    2566 log.go:172] (0xc00043c0b0) (0xc000677360) Stream removed, broadcasting: 1\nI0525 11:52:05.486318    2566 log.go:172] (0xc00043c0b0) Go away received\nI0525 11:52:05.486833    2566 log.go:172] (0xc00043c0b0) (0xc000677360) Stream removed, broadcasting: 1\nI0525 11:52:05.486853    2566 log.go:172] (0xc00043c0b0) (0xc0008ba000) Stream removed, broadcasting: 3\nI0525 11:52:05.486863    2566 log.go:172] (0xc00043c0b0) (0xc0008ba0a0) Stream removed, broadcasting: 5\n"
May 25 11:52:05.492: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5203.svc.cluster.local\tcanonical name = externalsvc.services-5203.svc.cluster.local.\nName:\texternalsvc.services-5203.svc.cluster.local\nAddress: 10.99.25.192\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5203, will wait for the garbage collector to delete the pods
May 25 11:52:05.552: INFO: Deleting ReplicationController externalsvc took: 6.612637ms
May 25 11:52:05.652: INFO: Terminating ReplicationController externalsvc pods took: 100.242354ms
May 25 11:52:10.663: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:52:10.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5203" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:15.881 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":196,"skipped":3240,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
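The Services spec above stands up a backing ReplicationController and service named externalsvc, then mutates the original ClusterIP service into an ExternalName service pointing at externalsvc's in-cluster FQDN, which is why the nslookup output shows a CNAME. A hedged client-go sketch of that conversion; the namespace and service names come from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	ns, name := "services-5203", "clusterip-service"

	svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
	must(err)

	// After this update the cluster DNS answers lookups for clusterip-service
	// with a CNAME to externalsvc's FQDN, which is what the nslookup verified.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc." + ns + ".svc.cluster.local"
	svc.Spec.ClusterIP = "" // an ExternalName service does not keep a cluster IP
	svc.Spec.Ports = nil

	_, err = cs.CoreV1().Services(ns).Update(context.TODO(), svc, metav1.UpdateOptions{})
	must(err)
	fmt.Println("service converted to ExternalName")
}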
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:52:10.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-526, will wait for the garbage collector to delete the pods
May 25 11:52:16.895: INFO: Deleting Job.batch foo took: 5.86116ms
May 25 11:52:16.995: INFO: Terminating Job.batch foo pods took: 100.285537ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:53:04.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-526" for this suite.

• [SLOW TEST:54.803 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":197,"skipped":3289,"failed":0}
SSSSSSS
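The Job spec above deletes the Job and relies on the garbage collector to remove its pods, then polls until the Job object is gone. Below is a sketch of the same deletion pattern with client-go; the propagation policy choice is an assumption about intent, not taken from the test source.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	ns, name := "job-526", "foo" // from the log

	// Background propagation: the Job object goes away immediately and the
	// garbage collector deletes its pods afterwards.
	policy := metav1.DeletePropagationBackground
	must(cs.BatchV1().Jobs(ns).Delete(context.TODO(), name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	}))

	// "Ensuring job was deleted": poll until Get reports NotFound.
	must(wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.BatchV1().Jobs(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	}))
	fmt.Println("job and its pods are gone")
}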
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:53:05.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 25 11:53:07.103: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:07.254: INFO: Number of nodes with available pods: 0
May 25 11:53:07.254: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:08.261: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:08.264: INFO: Number of nodes with available pods: 0
May 25 11:53:08.264: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:09.856: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:09.859: INFO: Number of nodes with available pods: 0
May 25 11:53:09.859: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:10.310: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:10.314: INFO: Number of nodes with available pods: 0
May 25 11:53:10.314: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:11.268: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:11.618: INFO: Number of nodes with available pods: 0
May 25 11:53:11.618: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:12.454: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:12.723: INFO: Number of nodes with available pods: 0
May 25 11:53:12.723: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:13.271: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:13.528: INFO: Number of nodes with available pods: 0
May 25 11:53:13.528: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:14.498: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:14.500: INFO: Number of nodes with available pods: 0
May 25 11:53:14.500: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:15.714: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:16.172: INFO: Number of nodes with available pods: 0
May 25 11:53:16.172: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:16.664: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:17.217: INFO: Number of nodes with available pods: 1
May 25 11:53:17.217: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:17.585: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:17.588: INFO: Number of nodes with available pods: 1
May 25 11:53:17.588: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:18.346: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:18.350: INFO: Number of nodes with available pods: 2
May 25 11:53:18.350: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 25 11:53:18.496: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:18.499: INFO: Number of nodes with available pods: 1
May 25 11:53:18.499: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:19.538: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:19.542: INFO: Number of nodes with available pods: 1
May 25 11:53:19.542: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:20.504: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:20.508: INFO: Number of nodes with available pods: 1
May 25 11:53:20.508: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:21.516: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:21.533: INFO: Number of nodes with available pods: 1
May 25 11:53:21.533: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:22.511: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:22.513: INFO: Number of nodes with available pods: 1
May 25 11:53:22.513: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:23.551: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:23.556: INFO: Number of nodes with available pods: 1
May 25 11:53:23.556: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:24.659: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:24.662: INFO: Number of nodes with available pods: 1
May 25 11:53:24.662: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:25.505: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:25.509: INFO: Number of nodes with available pods: 1
May 25 11:53:25.509: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:26.711: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:26.715: INFO: Number of nodes with available pods: 1
May 25 11:53:26.715: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:27.504: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:27.507: INFO: Number of nodes with available pods: 1
May 25 11:53:27.507: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:28.574: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:28.631: INFO: Number of nodes with available pods: 1
May 25 11:53:28.631: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:29.505: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:29.509: INFO: Number of nodes with available pods: 1
May 25 11:53:29.509: INFO: Node kali-worker is running more than one daemon pod
May 25 11:53:30.506: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:53:30.509: INFO: Number of nodes with available pods: 2
May 25 11:53:30.509: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9996, will wait for the garbage collector to delete the pods
May 25 11:53:30.571: INFO: Deleting DaemonSet.extensions daemon-set took: 6.430704ms
May 25 11:53:30.971: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.253227ms
May 25 11:53:43.875: INFO: Number of nodes with available pods: 0
May 25 11:53:43.875: INFO: Number of running nodes: 0, number of available pods: 0
May 25 11:53:43.902: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9996/daemonsets","resourceVersion":"7184035"},"items":null}

May 25 11:53:43.904: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9996/pods","resourceVersion":"7184035"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:53:43.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9996" for this suite.

• [SLOW TEST:38.425 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":198,"skipped":3296,"failed":0}
SSSSSSSSSSSSSSSS
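The repeated "DaemonSet pods can't tolerate node kali-control-plane with taints ..." lines come from the framework skipping nodes whose NoSchedule taints the DaemonSet's pod template does not tolerate when counting expected daemon pods. Below is a simplified sketch of that node filtering; the toleration-matching helper is an approximation of the scheduling rules, not the framework's implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// tolerates is a simplified check of whether any toleration matches the taint.
func tolerates(tolerations []corev1.Toleration, taint corev1.Taint) bool {
	for _, t := range tolerations {
		if t.Effect != "" && t.Effect != taint.Effect {
			continue
		}
		if t.Key != "" && t.Key != taint.Key {
			continue
		}
		if t.Operator == corev1.TolerationOpExists || t.Key == "" {
			return true
		}
		if t.Value == taint.Value {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The simple DaemonSet in the test declares no tolerations, so tainted
	// control-plane nodes are expected to host no daemon pods.
	var podTolerations []corev1.Toleration

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		schedulable := true
		for _, taint := range node.Spec.Taints {
			if taint.Effect == corev1.TaintEffectNoSchedule && !tolerates(podTolerations, taint) {
				schedulable = false
				fmt.Printf("skipping node %s: untolerated NoSchedule taint %s\n", node.Name, taint.Key)
			}
		}
		if schedulable {
			fmt.Printf("expecting a daemon pod on node %s\n", node.Name)
		}
	}
}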
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:53:43.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:53:43.991: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a235b4e-5dbd-49cb-8d1e-48d6e924359c" in namespace "projected-5624" to be "Succeeded or Failed"
May 25 11:53:44.040: INFO: Pod "downwardapi-volume-9a235b4e-5dbd-49cb-8d1e-48d6e924359c": Phase="Pending", Reason="", readiness=false. Elapsed: 48.446016ms
May 25 11:53:46.044: INFO: Pod "downwardapi-volume-9a235b4e-5dbd-49cb-8d1e-48d6e924359c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052834983s
May 25 11:53:48.048: INFO: Pod "downwardapi-volume-9a235b4e-5dbd-49cb-8d1e-48d6e924359c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056595849s
STEP: Saw pod success
May 25 11:53:48.048: INFO: Pod "downwardapi-volume-9a235b4e-5dbd-49cb-8d1e-48d6e924359c" satisfied condition "Succeeded or Failed"
May 25 11:53:48.050: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9a235b4e-5dbd-49cb-8d1e-48d6e924359c container client-container: 
STEP: delete the pod
May 25 11:53:48.082: INFO: Waiting for pod downwardapi-volume-9a235b4e-5dbd-49cb-8d1e-48d6e924359c to disappear
May 25 11:53:48.098: INFO: Pod downwardapi-volume-9a235b4e-5dbd-49cb-8d1e-48d6e924359c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:53:48.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5624" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3312,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
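This spec mounts the container's own CPU limit into the projected downward API volume via a resourceFieldRef instead of a fieldRef. A sketch of the relevant pod shape follows; the limit value, divisor, image, and paths are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpu-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
										Divisor:       resource.MustParse("1m"),
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The next spec in the run, "node allocatable (cpu) as default cpu limit if the limit is not set", uses the same resourceFieldRef but omits the container's CPU limit, in which case the file is populated from the node's allocatable CPU instead.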
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:53:48.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:53:48.207: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8564e09f-e93c-45fe-9a2a-0d636d096c92" in namespace "downward-api-688" to be "Succeeded or Failed"
May 25 11:53:48.228: INFO: Pod "downwardapi-volume-8564e09f-e93c-45fe-9a2a-0d636d096c92": Phase="Pending", Reason="", readiness=false. Elapsed: 21.628521ms
May 25 11:53:50.233: INFO: Pod "downwardapi-volume-8564e09f-e93c-45fe-9a2a-0d636d096c92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026536661s
May 25 11:53:52.241: INFO: Pod "downwardapi-volume-8564e09f-e93c-45fe-9a2a-0d636d096c92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033952774s
STEP: Saw pod success
May 25 11:53:52.241: INFO: Pod "downwardapi-volume-8564e09f-e93c-45fe-9a2a-0d636d096c92" satisfied condition "Succeeded or Failed"
May 25 11:53:52.246: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-8564e09f-e93c-45fe-9a2a-0d636d096c92 container client-container: 
STEP: delete the pod
May 25 11:53:52.275: INFO: Waiting for pod downwardapi-volume-8564e09f-e93c-45fe-9a2a-0d636d096c92 to disappear
May 25 11:53:52.303: INFO: Pod downwardapi-volume-8564e09f-e93c-45fe-9a2a-0d636d096c92 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:53:52.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-688" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3338,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:53:52.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:54:24.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6063" for this suite.

• [SLOW TEST:32.506 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3389,"failed":0}
SSSSSSS
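The container runtime spec above starts containers that exit and then asserts on the reported RestartCount, Phase, Ready condition, and State. Below is a sketch of reading those same status fields for one pod with client-go; the namespace comes from the log, while the pod name is an illustrative placeholder.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "container-runtime-6063", "terminate-cmd-rpa" // pod name is a placeholder

	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	fmt.Println("phase:", pod.Status.Phase)
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			fmt.Println("ready condition:", cond.Status)
		}
	}
	for _, st := range pod.Status.ContainerStatuses {
		fmt.Printf("container %s restarts=%d\n", st.Name, st.RestartCount)
		if term := st.State.Terminated; term != nil {
			fmt.Printf("  terminated: exitCode=%d reason=%s\n", term.ExitCode, term.Reason)
		}
	}
}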
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:54:24.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:54:24.959: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 25 11:54:24.967: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:25.010: INFO: Number of nodes with available pods: 0
May 25 11:54:25.010: INFO: Node kali-worker is running more than one daemon pod
May 25 11:54:26.015: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:26.019: INFO: Number of nodes with available pods: 0
May 25 11:54:26.019: INFO: Node kali-worker is running more than one daemon pod
May 25 11:54:27.015: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:27.018: INFO: Number of nodes with available pods: 0
May 25 11:54:27.018: INFO: Node kali-worker is running more than one daemon pod
May 25 11:54:28.059: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:28.063: INFO: Number of nodes with available pods: 0
May 25 11:54:28.063: INFO: Node kali-worker is running more than one daemon pod
May 25 11:54:29.015: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:29.019: INFO: Number of nodes with available pods: 1
May 25 11:54:29.019: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:54:30.022: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:30.028: INFO: Number of nodes with available pods: 2
May 25 11:54:30.028: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 25 11:54:30.250: INFO: Wrong image for pod: daemon-set-j4szj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:30.250: INFO: Wrong image for pod: daemon-set-qzwht. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:30.330: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:31.361: INFO: Wrong image for pod: daemon-set-j4szj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:31.361: INFO: Wrong image for pod: daemon-set-qzwht. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:31.370: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:32.340: INFO: Wrong image for pod: daemon-set-j4szj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:32.340: INFO: Wrong image for pod: daemon-set-qzwht. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:32.344: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:33.335: INFO: Wrong image for pod: daemon-set-j4szj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:33.335: INFO: Wrong image for pod: daemon-set-qzwht. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:33.335: INFO: Pod daemon-set-qzwht is not available
May 25 11:54:33.339: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:34.335: INFO: Wrong image for pod: daemon-set-j4szj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:34.335: INFO: Pod daemon-set-vbspd is not available
May 25 11:54:34.338: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:35.382: INFO: Wrong image for pod: daemon-set-j4szj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:35.382: INFO: Pod daemon-set-vbspd is not available
May 25 11:54:35.387: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:36.335: INFO: Wrong image for pod: daemon-set-j4szj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:36.335: INFO: Pod daemon-set-vbspd is not available
May 25 11:54:36.338: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:37.351: INFO: Wrong image for pod: daemon-set-j4szj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:37.356: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:38.336: INFO: Wrong image for pod: daemon-set-j4szj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:38.342: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:39.335: INFO: Wrong image for pod: daemon-set-j4szj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 25 11:54:39.335: INFO: Pod daemon-set-j4szj is not available
May 25 11:54:39.339: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:40.336: INFO: Pod daemon-set-qqxcc is not available
May 25 11:54:40.341: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 25 11:54:40.344: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:40.347: INFO: Number of nodes with available pods: 1
May 25 11:54:40.347: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:54:41.352: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:41.355: INFO: Number of nodes with available pods: 1
May 25 11:54:41.355: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:54:42.352: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:42.356: INFO: Number of nodes with available pods: 1
May 25 11:54:42.356: INFO: Node kali-worker2 is running more than one daemon pod
May 25 11:54:43.352: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 11:54:43.355: INFO: Number of nodes with available pods: 2
May 25 11:54:43.356: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6432, will wait for the garbage collector to delete the pods
May 25 11:54:43.427: INFO: Deleting DaemonSet.extensions daemon-set took: 6.677993ms
May 25 11:54:43.827: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.28151ms
May 25 11:54:53.931: INFO: Number of nodes with available pods: 0
May 25 11:54:53.931: INFO: Number of running nodes: 0, number of available pods: 0
May 25 11:54:53.933: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6432/daemonsets","resourceVersion":"7184457"},"items":null}

May 25 11:54:53.936: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6432/pods","resourceVersion":"7184457"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:54:54.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6432" for this suite.

• [SLOW TEST:29.299 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":202,"skipped":3396,"failed":0}
SSSS
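The RollingUpdate spec above changes the DaemonSet's pod template image from httpd:2.4.38-alpine to the agnhost image and waits until every node runs an updated, available pod, which is what the "Wrong image for pod" polling reflects. A hedged client-go sketch of the same update-and-wait loop; the names and image come from the log.

package main

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	ns, name := "daemonsets-6432", "daemon-set" // names from the log
	newImage := "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12"

	ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	must(err)

	// RollingUpdate is the default strategy, but set it explicitly as the spec name implies.
	ds.Spec.UpdateStrategy = appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType}
	ds.Spec.Template.Spec.Containers[0].Image = newImage
	_, err = cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{})
	must(err)

	// Wait until every scheduled daemon pod runs the new template and is available.
	must(wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		cur, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return cur.Status.UpdatedNumberScheduled == cur.Status.DesiredNumberScheduled &&
			cur.Status.NumberAvailable == cur.Status.DesiredNumberScheduled, nil
	}))
	fmt.Println("rolling update complete")
}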
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:54:54.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 25 11:54:54.345: INFO: Pod name pod-release: Found 0 pods out of 1
May 25 11:54:59.357: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:54:59.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5044" for this suite.

• [SLOW TEST:5.619 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":203,"skipped":3400,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
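The ReplicationController spec above changes the matched label on one of the controller's pods and verifies the pod is released (orphaned) while the controller keeps its replica count. Here is a sketch of triggering that release by relabeling a pod; the pod name and the "name" label key/value are illustrative assumptions.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	ns := "replication-controller-5044" // from the log; pod name below is a placeholder
	podName := "pod-release-abcde"

	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
	must(err)

	// Change the label the RC selector matches on; on its next sync the
	// controller releases the pod (drops its controller ownerReference) and
	// creates a replacement to restore the replica count.
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["name"] = "not-pod-release"
	_, err = cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
	must(err)

	updated, err := cs.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
	must(err)
	fmt.Println("ownerReferences after relabel:", updated.OwnerReferences) // empty once the RC has released the pod
}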
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:54:59.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:54:59.946: INFO: Creating deployment "webserver-deployment"
May 25 11:55:00.005: INFO: Waiting for observed generation 1
May 25 11:55:02.089: INFO: Waiting for all required pods to come up
May 25 11:55:02.096: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 25 11:55:14.292: INFO: Waiting for deployment "webserver-deployment" to complete
May 25 11:55:14.299: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 25 11:55:14.308: INFO: Updating deployment webserver-deployment
May 25 11:55:14.308: INFO: Waiting for observed generation 2
May 25 11:55:16.335: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 25 11:55:16.338: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 25 11:55:16.340: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 25 11:55:16.811: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 25 11:55:16.811: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 25 11:55:16.812: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 25 11:55:16.819: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 25 11:55:16.819: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 25 11:55:16.825: INFO: Updating deployment webserver-deployment
May 25 11:55:16.825: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 25 11:55:16.975: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 25 11:55:17.023: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 25 11:55:17.409: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-222 /apis/apps/v1/namespaces/deployment-222/deployments/webserver-deployment 693e73b8-dec2-413b-bb52-d5fd283b614e 7184817 3 2020-05-25 11:54:59 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-25 11:55:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 
105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0054c2338  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-25 11:55:15 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-25 11:55:16 +0000 UTC,LastTransitionTime:2020-05-25 11:55:16 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
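Note: the long `FieldsV1{Raw:*[123 34 102 ...]}` runs in the dump above are not corruption. The framework prints the whole object with a `%v`-style verb, so each managedFields entry's JSON comes out as a decimal byte slice. A minimal Go sketch showing how such a slice decodes back to readable JSON (the short prefix below is copied from the e2e.test manager entry above; the full slice decodes the same way):

    package main

    import "fmt"

    func main() {
        // First bytes of the managedFields entry printed above.
        // 123 34 102 58 ... is plain ASCII for the managed-fields JSON.
        raw := []byte{123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123}
        fmt.Println(string(raw)) // prints: {"f:metadata":{
    }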

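For reference, the DeploymentSpec in the dump above boils down to 30 replicas of a `webserver:404` pod selected by `name=httpd`, rolled out with RollingUpdate, maxSurge=3, maxUnavailable=2. The following Go sketch is a hedged reconstruction from those logged fields only (names, namespace, image, replica count and strategy come from the dump; everything else is left at defaults) and is not the test's actual source:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func int32Ptr(i int32) *int32                            { return &i }
    func intstrPtr(v intstr.IntOrString) *intstr.IntOrString { return &v }

    // webserverDeployment mirrors the Spec logged for "webserver-deployment".
    func webserverDeployment() *appsv1.Deployment {
        labels := map[string]string{"name": "httpd"}
        return &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment", Namespace: "deployment-222"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(30),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxUnavailable: intstrPtr(intstr.FromInt(2)),
                        MaxSurge:       intstrPtr(intstr.FromInt(3)),
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{Name: "httpd", Image: "webserver:404"}},
                    },
                },
            },
        }
    }

    func main() {
        d := webserverDeployment()
        fmt.Printf("%s: %d replicas, maxSurge=%s, maxUnavailable=%s\n",
            d.Name, *d.Spec.Replicas,
            d.Spec.Strategy.RollingUpdate.MaxSurge.String(),
            d.Spec.Strategy.RollingUpdate.MaxUnavailable.String())
    }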
May 25 11:55:17.524: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-222 /apis/apps/v1/namespaces/deployment-222/replicasets/webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 7184839 3 2020-05-25 11:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 693e73b8-dec2-413b-bb52-d5fd283b614e 0xc0054c27d7 0xc0054c27d8}] []  [{kube-controller-manager Update apps/v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 51 101 55 51 98 56 45 100 101 99 50 45 52 49 51 98 45 98 98 53 50 45 100 53 102 100 50 56 51 98 54 49 52 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 
125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0054c2868  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 25 11:55:17.524: INFO: All old ReplicaSets of Deployment "webserver-deployment":
May 25 11:55:17.524: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-222 /apis/apps/v1/namespaces/deployment-222/replicasets/webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 7184838 3 2020-05-25 11:55:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 693e73b8-dec2-413b-bb52-d5fd283b614e 0xc0054c28c7 0xc0054c28c8}] []  [{kube-controller-manager Update apps/v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 51 101 55 51 98 56 45 100 101 99 50 45 52 49 51 98 45 98 98 53 50 45 100 53 102 100 50 56 51 98 54 49 52 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 
105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0054c2938  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
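The counts in the two ReplicaSet dumps are consistent with that strategy: with 30 desired replicas, maxSurge=3 and maxUnavailable=2 (absolute values, not percentages), the controller may run at most 33 pods in total, which matches the `deployment.kubernetes.io/max-replicas:33` annotation on both ReplicaSets and the 13 + 20 desired replicas split between the new and old ReplicaSet, while keeping at least 28 available. A small sketch of that arithmetic, using plain integers rather than the controller's IntOrString handling:

    package main

    import "fmt"

    // rolloutBounds returns the pod-count ceiling and the availability floor
    // implied by absolute maxSurge/maxUnavailable values.
    func rolloutBounds(replicas, maxSurge, maxUnavailable int) (maxTotal, minAvailable int) {
        return replicas + maxSurge, replicas - maxUnavailable
    }

    func main() {
        maxTotal, minAvailable := rolloutBounds(30, 3, 2)
        fmt.Println(maxTotal, minAvailable) // 33 28
    }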
May 25 11:55:17.582: INFO: Pod "webserver-deployment-6676bcd6d4-4xdt7" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4xdt7 webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-4xdt7 c710abf1-4875-4fc1-9533-802207b6af29 7184763 0 2020-05-25 11:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c2ea7 0xc0054c2ea8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:15 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-25 11:55:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
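Each pod logged as "is not available" below reports Ready=False with reason ContainersNotReady and an httpd container stuck in ContainerCreating. A minimal sketch of how such a pod can be classified from its conditions using the same corev1 types; this mirrors the idea behind the framework's availability check, not its exact code:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Shape of the webserver-deployment-6676bcd6d4-4xdt7 pod above:
        // scheduled, but its container is still being created.
        pod := &corev1.Pod{
            Status: corev1.PodStatus{
                Phase: corev1.PodPending,
                Conditions: []corev1.PodCondition{
                    {Type: corev1.PodScheduled, Status: corev1.ConditionTrue},
                    {Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
                },
            },
        }
        fmt.Println(isPodReady(pod)) // false
    }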
May 25 11:55:17.583: INFO: Pod "webserver-deployment-6676bcd6d4-4xjcn" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4xjcn webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-4xjcn 57f481bb-e122-4287-b38c-4bdb733cc46f 7184794 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c3057 0xc0054c3058}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.583: INFO: Pod "webserver-deployment-6676bcd6d4-7fwhd" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7fwhd webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-7fwhd 951250ab-6dbf-4d97-87ec-aa296a918ae7 7184760 0 2020-05-25 11:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c3197 0xc0054c3198}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:15 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-25 11:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.583: INFO: Pod "webserver-deployment-6676bcd6d4-84bmz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-84bmz webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-84bmz 4222e2d8-07db-4c85-adf6-4e5f689d1267 7184729 0 2020-05-25 11:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c3347 0xc0054c3348}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:14 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-25 11:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.584: INFO: Pod "webserver-deployment-6676bcd6d4-d9tx4" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-d9tx4 webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-d9tx4 6dd12700-28c0-4404-9a76-e5887d4bb047 7184741 0 2020-05-25 11:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c34f7 0xc0054c34f8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:14 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-25 11:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.584: INFO: Pod "webserver-deployment-6676bcd6d4-f2wx7" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-f2wx7 webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-f2wx7 69d08192-d383-4e98-9e8e-e95431d41eb0 7184844 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c36c7 0xc0054c36c8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.585: INFO: Pod "webserver-deployment-6676bcd6d4-g8vp8" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-g8vp8 webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-g8vp8 4a219ce2-67b5-4cdb-917f-cd2bb980c175 7184828 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c3817 0xc0054c3818}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.585: INFO: Pod "webserver-deployment-6676bcd6d4-hhvqg" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hhvqg webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-hhvqg 7429f777-8d5d-414c-b120-c65e77c545be 7184822 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c3957 0xc0054c3958}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.585: INFO: Pod "webserver-deployment-6676bcd6d4-js59t" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-js59t webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-js59t 32bccf14-c677-4c04-9f16-5b9c9ec9b17b 7184744 0 2020-05-25 11:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c3a97 0xc0054c3a98}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:14 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-25 11:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.586: INFO: Pod "webserver-deployment-6676bcd6d4-lz8f2" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lz8f2 webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-lz8f2 65c2ae92-4163-4877-8842-b4ed1fa55ffa 7184815 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c3c57 0xc0054c3c58}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.586: INFO: Pod "webserver-deployment-6676bcd6d4-mrp9z" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mrp9z webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-mrp9z b7f838e5-000b-4f9e-aaed-35fc7c626147 7184798 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c3db7 0xc0054c3db8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.586: INFO: Pod "webserver-deployment-6676bcd6d4-qk665" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qk665 webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-qk665 bc2d34cf-8e7d-4c47-9e85-0dc9524a4ee4 7184819 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc0054c3ef7 0xc0054c3ef8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.586: INFO: Pod "webserver-deployment-6676bcd6d4-w9hbh" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-w9hbh webserver-deployment-6676bcd6d4- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-6676bcd6d4-w9hbh f058d187-053e-4b3f-86e8-3ebf34048fad 7184786 0 2020-05-25 11:55:16 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d49003eb-6565-4aed-ac8f-aaf4c94a0cef 0xc005770037 0xc005770038}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 57 48 48 51 101 98 45 54 53 54 53 45 52 97 101 100 45 97 99 56 102 45 97 97 102 52 99 57 52 97 48 99 101 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.587: INFO: Pod "webserver-deployment-84855cf797-2sg42" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-2sg42 webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-2sg42 5943c696-f211-4642-9dbe-c6a04897965f 7184687 0 2020-05-25 11:55:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005770187 0xc005770188}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 48 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300
,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.200,StartTime:2020-05-25 11:55:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 11:55:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e8699ea50612c3d0c5aeeb55563152dfa364c6d3487f3ee45297a5cbaff1cc02,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.587: INFO: Pod "webserver-deployment-84855cf797-48mtv" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-48mtv webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-48mtv ea4b58be-d095-451b-be2b-4394726f251b 7184812 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005770337 0xc005770338}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.587: INFO: Pod "webserver-deployment-84855cf797-7rjdp" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-7rjdp webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-7rjdp 645479ba-a4ae-4e20-93ed-af9bb6b25ec2 7184640 0 2020-05-25 11:55:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005770467 0xc005770468}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 48 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,
},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.207,StartTime:2020-05-25 11:55:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 11:55:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3aa1286ce390cbdc8a4cfedd2bc5000b33dd7594f40c2f166d0f3620af4470a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.207,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.588: INFO: Pod "webserver-deployment-84855cf797-7z769" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-7z769 webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-7z769 4e37ef0f-24dc-47cf-ad0d-fa856d21ca88 7184659 0 2020-05-25 11:55:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005770617 0xc005770618}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 57 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300
,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.197,StartTime:2020-05-25 11:55:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 11:55:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4dc6322caa0bd82092dd7e347c2c433118fece1584f2551d340ce0eb37027b61,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.588: INFO: Pod "webserver-deployment-84855cf797-89rpd" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-89rpd webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-89rpd d53adb48-f8a7-4fc5-b4ab-39b702c51b01 7184678 0 2020-05-25 11:55:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc0057707c7 0xc0057707c8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 48 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,
},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.208,StartTime:2020-05-25 11:55:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 11:55:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://30448c831bb76858be032b64bb797ff04872fcfd1959b8f33c1117367e503b90,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.208,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.588: INFO: Pod "webserver-deployment-84855cf797-89zpl" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-89zpl webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-89zpl 7b1c4c68-4b0c-45c9-9bc1-20b8e2be946b 7184808 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005770977 0xc005770978}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
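Note: the "is available" / "is not available" labels in this listing track whether the pod is Running with a Ready condition of True (for at least minReadySeconds). A simplified, standalone sketch of that check, independent of the e2e framework's own helper:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable is a simplified version of the readiness check behind the
// "is available" lines above: the pod must be Running and its Ready condition
// must have been True for at least minReadySeconds.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			readyFor := now.Sub(c.LastTransitionTime.Time)
			return readyFor >= time.Duration(minReadySeconds)*time.Second
		}
	}
	return false
}

func main() {
	// A pod that became Ready 30 seconds ago, like the Running pods in this listing.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{{
				Type:               corev1.PodReady,
				Status:             corev1.ConditionTrue,
				LastTransitionTime: metav1.NewTime(time.Now().Add(-30 * time.Second)),
			}},
		},
	}
	fmt.Println(isPodAvailable(pod, 0, time.Now())) // true
}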
May 25 11:55:17.589: INFO: Pod "webserver-deployment-84855cf797-f265c" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-f265c webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-f265c ff80d8d1-3027-4b4c-b510-c90719dd153e 7184671 0 2020-05-25 11:55:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005770aa7 0xc005770aa8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 49 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,
},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.210,StartTime:2020-05-25 11:55:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 11:55:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cdf5a489b69cb9017c01772e1f9ae5408b0fdca0b9aa318e27909f93784c3d93,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.589: INFO: Pod "webserver-deployment-84855cf797-gbw9p" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-gbw9p webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-gbw9p 4559149a-f2e8-4d02-9161-3138635c4116 7184843 0 2020-05-25 11:55:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005770c57 0xc005770c58}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGate
s:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-25 11:55:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.589: INFO: Pod "webserver-deployment-84855cf797-kbrv6" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-kbrv6 webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-kbrv6 8b34654d-2ccf-4973-9b85-b4b2dae38a79 7184820 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005770de7 0xc005770de8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.590: INFO: Pod "webserver-deployment-84855cf797-qmqsq" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qmqsq webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-qmqsq 98c7c918-a2d4-4fb1-a3a1-cce3e43cdab2 7184803 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005770f17 0xc005770f18}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.590: INFO: Pod "webserver-deployment-84855cf797-qrv2d" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qrv2d webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-qrv2d 4bc5c109-b952-433d-ae2a-b743d34664d0 7184690 0 2020-05-25 11:55:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005771047 0xc005771048}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 57 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300
,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.199,StartTime:2020-05-25 11:55:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 11:55:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d80d2a7db198d79604aa7543f763c383897a167c5fcfa6ca0193612b2d2ac556,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.199,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
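(Editor's note: the "is available" / "is not available" verdicts track pod readiness. The available pods above, such as webserver-deployment-84855cf797-qrv2d, are Running with Ready=True, while the unavailable ones were just created at 11:55:17 and are still Pending. A minimal sketch of that readiness check, assuming the k8s.io/api/core/v1 types; the framework's real availability helper may also honour minReadySeconds, which is omitted here.)

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // which is what separates the available pods from the pending ones
    // in the dumps above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // A freshly scheduled pod has no Ready condition yet.
        pending := &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodPending}}
        fmt.Println(isPodReady(pending)) // false
    }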
May 25 11:55:17.590: INFO: Pod "webserver-deployment-84855cf797-szpww" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-szpww webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-szpww 5e0fc976-e2f9-4872-b325-1e0648d4fd87 7184816 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc0057711f7 0xc0057711f8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.590: INFO: Pod "webserver-deployment-84855cf797-t69hp" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-t69hp webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-t69hp 16c1b4c7-b937-45a6-84c7-5044070e89e3 7184666 0 2020-05-25 11:55:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005771327 0xc005771328}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 48 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,
},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.209,StartTime:2020-05-25 11:55:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 11:55:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8ee2bd962eedeec8286ea087fce9a445cb4095e27d1f4e710fe1d8be9a9ec624,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.591: INFO: Pod "webserver-deployment-84855cf797-tj9xd" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-tj9xd webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-tj9xd 8bb1faf2-14ae-4d9d-93e4-a147e4c1eb6e 7184827 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc0057714d7 0xc0057714d8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.591: INFO: Pod "webserver-deployment-84855cf797-v4fvt" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-v4fvt webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-v4fvt 55aa58a0-a01f-462a-bbb5-d56e314cefcb 7184807 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005771607 0xc005771608}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.591: INFO: Pod "webserver-deployment-84855cf797-wzz6f" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-wzz6f webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-wzz6f 9441dff3-c1ff-4790-9238-275f862eab6e 7184675 0 2020-05-25 11:55:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005771737 0xc005771738}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 49 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,
},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.211,StartTime:2020-05-25 11:55:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 11:55:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a76f1aa2961c43ace30f639d50f05ae9a2dd0260119586a838acd3ec702485d0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.211,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
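(Editor's note: the per-pod verdicts above come from listing the deployment's pods by their label selector in namespace deployment-222. A minimal client-go sketch of such a listing, assuming a client-go version matching the v1.18.2 cluster and the kubeconfig path used in this run; it is an illustration, not the e2e framework's own code.)

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as used by this e2e run.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Label selector taken from the pod labels in the dumps above.
        pods, err := cs.CoreV1().Pods("deployment-222").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "name=httpd"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%s ready=%v phase=%s\n", p.Name, ready, p.Status.Phase)
        }
    }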
May 25 11:55:17.591: INFO: Pod "webserver-deployment-84855cf797-xd57w" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-xd57w webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-xd57w a87a4ec0-a6d1-4926-9f0f-3f2cd40f3db3 7184824 0 2020-05-25 11:55:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc0057718e7 0xc0057718e8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGat
es:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-25 11:55:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.591: INFO: Pod "webserver-deployment-84855cf797-xkxrr" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-xkxrr webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-xkxrr da50fd67-b252-4ebe-aab2-b98e113a04ae 7184809 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005771a77 0xc005771a78}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.591: INFO: Pod "webserver-deployment-84855cf797-zttqv" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-zttqv webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-zttqv 531709ae-7472-4871-bf85-c33319c68046 7184784 0 2020-05-25 11:55:16 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005771ba7 0xc005771ba8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 11:55:17.591: INFO: Pod "webserver-deployment-84855cf797-zw79r" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-zw79r webserver-deployment-84855cf797- deployment-222 /api/v1/namespaces/deployment-222/pods/webserver-deployment-84855cf797-zw79r 96928902-bec3-4cd1-a612-2f48f9b32ece 7184813 0 2020-05-25 11:55:17 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 405f2fa6-0721-46d8-bcd1-d2d3780ef9d8 0xc005771cd7 0xc005771cd8}] []  [{kube-controller-manager Update v1 2020-05-25 11:55:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 48 53 102 50 102 97 54 45 48 55 50 49 45 52 54 100 56 45 98 99 100 49 45 100 50 100 51 55 56 48 101 102 57 100 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk779,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk779,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:55:17.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-222" for this suite.

• [SLOW TEST:18.070 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":204,"skipped":3425,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:55:17.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 11:55:20.695: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 11:55:22.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004521, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:55:25.066: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004521, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:55:26.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004521, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:55:28.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004521, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:55:31.208: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004521, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:55:33.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004521, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:55:34.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004521, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:55:37.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004521, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:55:38.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004521, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004520, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 11:55:42.419: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:55:43.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1198" for this suite.
STEP: Destroying namespace "webhook-1198-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:27.767 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":205,"skipped":3429,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:55:45.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:55:47.119: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f015ca3-e215-4209-8a00-cec00bc1ac3d" in namespace "projected-7137" to be "Succeeded or Failed"
May 25 11:55:47.123: INFO: Pod "downwardapi-volume-3f015ca3-e215-4209-8a00-cec00bc1ac3d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.846107ms
May 25 11:55:49.435: INFO: Pod "downwardapi-volume-3f015ca3-e215-4209-8a00-cec00bc1ac3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315908651s
May 25 11:55:51.466: INFO: Pod "downwardapi-volume-3f015ca3-e215-4209-8a00-cec00bc1ac3d": Phase="Running", Reason="", readiness=true. Elapsed: 4.346725588s
May 25 11:55:53.478: INFO: Pod "downwardapi-volume-3f015ca3-e215-4209-8a00-cec00bc1ac3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.359189536s
STEP: Saw pod success
May 25 11:55:53.479: INFO: Pod "downwardapi-volume-3f015ca3-e215-4209-8a00-cec00bc1ac3d" satisfied condition "Succeeded or Failed"
May 25 11:55:53.482: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-3f015ca3-e215-4209-8a00-cec00bc1ac3d container client-container: 
STEP: delete the pod
May 25 11:55:53.555: INFO: Waiting for pod downwardapi-volume-3f015ca3-e215-4209-8a00-cec00bc1ac3d to disappear
May 25 11:55:53.565: INFO: Pod downwardapi-volume-3f015ca3-e215-4209-8a00-cec00bc1ac3d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:55:53.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7137" for this suite.

• [SLOW TEST:8.205 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3436,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:55:53.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-7a9572ec-5fa8-49ef-910c-60390f5ad05b
STEP: Creating a pod to test consume secrets
May 25 11:55:54.010: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f1e3c8e4-48be-490c-8d83-1e0327e82fc7" in namespace "projected-7077" to be "Succeeded or Failed"
May 25 11:55:54.031: INFO: Pod "pod-projected-secrets-f1e3c8e4-48be-490c-8d83-1e0327e82fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.130693ms
May 25 11:55:56.245: INFO: Pod "pod-projected-secrets-f1e3c8e4-48be-490c-8d83-1e0327e82fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235512823s
May 25 11:55:58.249: INFO: Pod "pod-projected-secrets-f1e3c8e4-48be-490c-8d83-1e0327e82fc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.239624238s
STEP: Saw pod success
May 25 11:55:58.250: INFO: Pod "pod-projected-secrets-f1e3c8e4-48be-490c-8d83-1e0327e82fc7" satisfied condition "Succeeded or Failed"
May 25 11:55:58.252: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-f1e3c8e4-48be-490c-8d83-1e0327e82fc7 container projected-secret-volume-test: 
STEP: delete the pod
May 25 11:55:58.301: INFO: Waiting for pod pod-projected-secrets-f1e3c8e4-48be-490c-8d83-1e0327e82fc7 to disappear
May 25 11:55:58.314: INFO: Pod pod-projected-secrets-f1e3c8e4-48be-490c-8d83-1e0327e82fc7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:55:58.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7077" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3457,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:55:58.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 25 11:55:58.425: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:56:04.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3976" for this suite.

• [SLOW TEST:6.412 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":208,"skipped":3458,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:56:04.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:56:22.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9850" for this suite.

• [SLOW TEST:17.318 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":209,"skipped":3470,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:56:22.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:56:28.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8579" for this suite.

• [SLOW TEST:6.507 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":210,"skipped":3508,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:56:28.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 25 11:56:28.900: INFO: Waiting up to 5m0s for pod "downward-api-3a3edb2b-b564-4d28-ba9f-8848b9c88246" in namespace "downward-api-8558" to be "Succeeded or Failed"
May 25 11:56:29.101: INFO: Pod "downward-api-3a3edb2b-b564-4d28-ba9f-8848b9c88246": Phase="Pending", Reason="", readiness=false. Elapsed: 200.810902ms
May 25 11:56:31.105: INFO: Pod "downward-api-3a3edb2b-b564-4d28-ba9f-8848b9c88246": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20437931s
May 25 11:56:33.109: INFO: Pod "downward-api-3a3edb2b-b564-4d28-ba9f-8848b9c88246": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.208487857s
STEP: Saw pod success
May 25 11:56:33.109: INFO: Pod "downward-api-3a3edb2b-b564-4d28-ba9f-8848b9c88246" satisfied condition "Succeeded or Failed"
May 25 11:56:33.112: INFO: Trying to get logs from node kali-worker pod downward-api-3a3edb2b-b564-4d28-ba9f-8848b9c88246 container dapi-container: 
STEP: delete the pod
May 25 11:56:33.170: INFO: Waiting for pod downward-api-3a3edb2b-b564-4d28-ba9f-8848b9c88246 to disappear
May 25 11:56:33.496: INFO: Pod downward-api-3a3edb2b-b564-4d28-ba9f-8848b9c88246 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:56:33.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8558" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3533,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:56:33.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-b3c64349-f579-4166-89d8-caa4f0502492
STEP: Creating a pod to test consume configMaps
May 25 11:56:33.682: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be4608d9-dbf6-4ac1-9ea0-3f34c44a8fe0" in namespace "projected-6515" to be "Succeeded or Failed"
May 25 11:56:33.694: INFO: Pod "pod-projected-configmaps-be4608d9-dbf6-4ac1-9ea0-3f34c44a8fe0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.090551ms
May 25 11:56:35.699: INFO: Pod "pod-projected-configmaps-be4608d9-dbf6-4ac1-9ea0-3f34c44a8fe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016522638s
May 25 11:56:37.706: INFO: Pod "pod-projected-configmaps-be4608d9-dbf6-4ac1-9ea0-3f34c44a8fe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023971829s
STEP: Saw pod success
May 25 11:56:37.706: INFO: Pod "pod-projected-configmaps-be4608d9-dbf6-4ac1-9ea0-3f34c44a8fe0" satisfied condition "Succeeded or Failed"
May 25 11:56:37.709: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-be4608d9-dbf6-4ac1-9ea0-3f34c44a8fe0 container projected-configmap-volume-test: 
STEP: delete the pod
May 25 11:56:37.770: INFO: Waiting for pod pod-projected-configmaps-be4608d9-dbf6-4ac1-9ea0-3f34c44a8fe0 to disappear
May 25 11:56:37.789: INFO: Pod pod-projected-configmaps-be4608d9-dbf6-4ac1-9ea0-3f34c44a8fe0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:56:37.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6515" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3556,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:56:37.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:56:37.959: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-e946eaf3-891d-4560-b854-175a64be0209" in namespace "security-context-test-9472" to be "Succeeded or Failed"
May 25 11:56:38.176: INFO: Pod "busybox-readonly-false-e946eaf3-891d-4560-b854-175a64be0209": Phase="Pending", Reason="", readiness=false. Elapsed: 216.624774ms
May 25 11:56:40.180: INFO: Pod "busybox-readonly-false-e946eaf3-891d-4560-b854-175a64be0209": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220357321s
May 25 11:56:42.184: INFO: Pod "busybox-readonly-false-e946eaf3-891d-4560-b854-175a64be0209": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.224544172s
May 25 11:56:42.184: INFO: Pod "busybox-readonly-false-e946eaf3-891d-4560-b854-175a64be0209" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:56:42.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9472" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3584,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:56:42.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
May 25 11:56:42.469: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-879" to be "Succeeded or Failed"
May 25 11:56:42.484: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.809459ms
May 25 11:56:44.488: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018894839s
May 25 11:56:46.659: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189847417s
May 25 11:56:48.662: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.193230464s
STEP: Saw pod success
May 25 11:56:48.662: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May 25 11:56:48.665: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
May 25 11:56:48.864: INFO: Waiting for pod pod-host-path-test to disappear
May 25 11:56:48.872: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:56:48.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-879" for this suite.

• [SLOW TEST:6.688 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3592,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:56:48.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:56:53.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9337" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3618,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:56:53.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-d9c14c22-0a88-451c-b830-2b7c0fc4867d
STEP: Creating a pod to test consume configMaps
May 25 11:56:53.420: INFO: Waiting up to 5m0s for pod "pod-configmaps-08ee4044-3b6c-4da5-b25d-4d1ab1777d76" in namespace "configmap-7803" to be "Succeeded or Failed"
May 25 11:56:53.466: INFO: Pod "pod-configmaps-08ee4044-3b6c-4da5-b25d-4d1ab1777d76": Phase="Pending", Reason="", readiness=false. Elapsed: 46.183872ms
May 25 11:56:55.471: INFO: Pod "pod-configmaps-08ee4044-3b6c-4da5-b25d-4d1ab1777d76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050549983s
May 25 11:56:57.474: INFO: Pod "pod-configmaps-08ee4044-3b6c-4da5-b25d-4d1ab1777d76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054048332s
STEP: Saw pod success
May 25 11:56:57.474: INFO: Pod "pod-configmaps-08ee4044-3b6c-4da5-b25d-4d1ab1777d76" satisfied condition "Succeeded or Failed"
May 25 11:56:57.477: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-08ee4044-3b6c-4da5-b25d-4d1ab1777d76 container configmap-volume-test: 
STEP: delete the pod
May 25 11:56:57.562: INFO: Waiting for pod pod-configmaps-08ee4044-3b6c-4da5-b25d-4d1ab1777d76 to disappear
May 25 11:56:57.580: INFO: Pod pod-configmaps-08ee4044-3b6c-4da5-b25d-4d1ab1777d76 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:56:57.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7803" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3634,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:56:57.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:56:57.731: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"bd2471a5-d6e2-4eec-9571-7366b452e4fb", Controller:(*bool)(0xc003656132), BlockOwnerDeletion:(*bool)(0xc003656133)}}
May 25 11:56:57.836: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9bdea01e-6b14-49b0-985b-2486e7d3839a", Controller:(*bool)(0xc002766282), BlockOwnerDeletion:(*bool)(0xc002766283)}}
May 25 11:56:57.851: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"faba99a5-5db1-46dc-b49a-43ba49cfaab7", Controller:(*bool)(0xc004173faa), BlockOwnerDeletion:(*bool)(0xc004173fab)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:57:02.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7458" for this suite.

• [SLOW TEST:5.389 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":217,"skipped":3672,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:57:02.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 11:57:03.959: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 11:57:05.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004624, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004624, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004624, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004623, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 11:57:09.055: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:57:09.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7346-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:57:10.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3769" for this suite.
STEP: Destroying namespace "webhook-3769-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.442 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":218,"skipped":3677,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:57:10.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 25 11:57:12.177: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 25 11:57:14.189: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004632, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004632, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004632, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004631, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:57:16.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004632, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004632, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004632, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004631, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 11:57:19.434: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:57:19.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:57:21.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2037" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:11.086 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":219,"skipped":3693,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:57:21.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:57:21.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config version'
May 25 11:57:22.037: INFO: stderr: ""
May 25 11:57:22.037: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:20Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:57:22.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8099" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":220,"skipped":3694,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:57:22.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:57:22.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-7668" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":221,"skipped":3701,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:57:22.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:58:22.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3558" for this suite.

• [SLOW TEST:60.105 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3723,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:58:22.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May 25 11:58:22.945: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4157 /api/v1/namespaces/watch-4157/configmaps/e2e-watch-test-watch-closed 711a82eb-5b40-40a3-92b8-c65c0faad9a8 7186268 0 2020-05-25 11:58:22 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-25 11:58:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 25 11:58:22.945: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4157 /api/v1/namespaces/watch-4157/configmaps/e2e-watch-test-watch-closed 711a82eb-5b40-40a3-92b8-c65c0faad9a8 7186270 0 2020-05-25 11:58:22 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-25 11:58:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May 25 11:58:22.969: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4157 /api/v1/namespaces/watch-4157/configmaps/e2e-watch-test-watch-closed 711a82eb-5b40-40a3-92b8-c65c0faad9a8 7186271 0 2020-05-25 11:58:22 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-25 11:58:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 25 11:58:22.970: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4157 /api/v1/namespaces/watch-4157/configmaps/e2e-watch-test-watch-closed 711a82eb-5b40-40a3-92b8-c65c0faad9a8 7186272 0 2020-05-25 11:58:22 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-25 11:58:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:58:22.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4157" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":223,"skipped":3730,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:58:22.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:58:30.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6263" for this suite.

• [SLOW TEST:7.449 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":224,"skipped":3737,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:58:30.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 11:58:31.070: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 11:58:33.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004711, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004711, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004711, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004711, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:58:35.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004711, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004711, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004711, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004711, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 11:58:38.309: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:58:38.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9229" for this suite.
STEP: Destroying namespace "webhook-9229-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.082 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":225,"skipped":3749,"failed":0}
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:58:38.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
May 25 11:58:39.180: INFO: created pod pod-service-account-defaultsa
May 25 11:58:39.180: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 25 11:58:39.208: INFO: created pod pod-service-account-mountsa
May 25 11:58:39.208: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 25 11:58:39.306: INFO: created pod pod-service-account-nomountsa
May 25 11:58:39.306: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 25 11:58:39.317: INFO: created pod pod-service-account-defaultsa-mountspec
May 25 11:58:39.317: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 25 11:58:39.364: INFO: created pod pod-service-account-mountsa-mountspec
May 25 11:58:39.364: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 25 11:58:39.568: INFO: created pod pod-service-account-nomountsa-mountspec
May 25 11:58:39.568: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 25 11:58:39.721: INFO: created pod pod-service-account-defaultsa-nomountspec
May 25 11:58:39.721: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 25 11:58:39.828: INFO: created pod pod-service-account-mountsa-nomountspec
May 25 11:58:39.828: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 25 11:58:39.844: INFO: created pod pod-service-account-nomountsa-nomountspec
May 25 11:58:39.844: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:58:39.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4801" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":226,"skipped":3749,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:58:39.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:58:40.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7368" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":227,"skipped":3786,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:58:40.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 25 11:58:58.793: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 11:58:58.807: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 11:59:00.807: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 11:59:00.813: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 11:59:02.807: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 11:59:02.812: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 11:59:04.807: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 11:59:04.811: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 11:59:06.807: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 11:59:06.812: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 11:59:08.807: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 11:59:08.811: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 11:59:10.807: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 11:59:10.811: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 11:59:12.807: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 11:59:12.812: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 11:59:14.807: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 11:59:14.812: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:59:14.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4295" for this suite.

• [SLOW TEST:34.609 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3805,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:59:14.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 25 11:59:19.412: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:59:19.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7722" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3855,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:59:19.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 11:59:20.154: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May 25 11:59:20.238: INFO: Pod name sample-pod: Found 0 pods out of 1
May 25 11:59:25.241: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 25 11:59:25.241: INFO: Creating deployment "test-rolling-update-deployment"
May 25 11:59:25.245: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
May 25 11:59:25.272: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
May 25 11:59:27.279: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
May 25 11:59:27.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004765, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004765, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004765, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004765, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:59:29.488: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004765, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004765, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004765, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004765, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 11:59:31.286: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 25 11:59:31.296: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-3214 /apis/apps/v1/namespaces/deployment-3214/deployments/test-rolling-update-deployment f140bf0e-e61f-4838-88af-35aa63bc2526 7186774 1 2020-05-25 11:59:25 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-05-25 11:59:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-25 11:59:29 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038c6df8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-25 11:59:25 +0000 UTC,LastTransitionTime:2020-05-25 11:59:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-05-25 11:59:29 +0000 UTC,LastTransitionTime:2020-05-25 11:59:25 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

May 25 11:59:31.299: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-3214 /apis/apps/v1/namespaces/deployment-3214/replicasets/test-rolling-update-deployment-59d5cb45c7 e10b7bc1-178d-4fc2-9613-f07df6863c44 7186762 1 2020-05-25 11:59:25 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment f140bf0e-e61f-4838-88af-35aa63bc2526 0xc0038c7357 0xc0038c7358}] []  [{kube-controller-manager Update apps/v1 2020-05-25 11:59:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 49 52 48 98 102 48 101 45 101 54 49 102 45 52 56 51 56 45 56 56 97 102 45 51 53 97 97 54 51 98 99 50 53 50 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 
101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038c73e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May 25 11:59:31.299: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
May 25 11:59:31.299: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-3214 /apis/apps/v1/namespaces/deployment-3214/replicasets/test-rolling-update-controller 77b0f29c-e54e-4779-b0c8-8602522e94ee 7186773 2 2020-05-25 11:59:20 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment f140bf0e-e61f-4838-88af-35aa63bc2526 0xc0038c721f 0xc0038c7230}] []  [{e2e.test Update apps/v1 2020-05-25 11:59:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-25 11:59:29 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 49 52 48 98 102 48 101 45 101 54 49 102 45 52 56 51 56 45 56 56 97 102 45 51 53 97 97 54 51 98 99 50 53 50 54 92 
34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0038c72e8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 25 11:59:31.302: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-b57dv" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-b57dv test-rolling-update-deployment-59d5cb45c7- deployment-3214 /api/v1/namespaces/deployment-3214/pods/test-rolling-update-deployment-59d5cb45c7-b57dv 7ee38c3b-9687-4aa4-965a-0e3358cb8dfa 7186761 0 2020-05-25 11:59:25 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 e10b7bc1-178d-4fc2-9613-f07df6863c44 0xc0038c78e7 0xc0038c78e8}] []  [{kube-controller-manager Update v1 2020-05-25 11:59:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 49 48 98 55 98 99 49 45 49 55 56 100 45 52 102 99 50 45 57 54 49 51 45 102 48 55 100 102 54 56 54 51 99 52 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-25 11:59:29 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 
100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 51 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n4dst,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n4dst,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n4dst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/un
reachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:59:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:59:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:59:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 11:59:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.230,StartTime:2020-05-25 11:59:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 11:59:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://7af6dfa1a24044a03273d9daaf44099ca7ed5c6ae900618f7aa60db5cb2c6b45,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.230,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:59:31.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3214" for this suite.

• [SLOW TEST:11.623 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":230,"skipped":3868,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:59:31.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-5adb964c-3716-4481-abef-35d3407aa204
STEP: Creating secret with name secret-projected-all-test-volume-f47e6e13-ae44-47ff-b585-83b8136a81dd
STEP: Creating a pod to test all projections for the projected volume plugin
May 25 11:59:31.450: INFO: Waiting up to 5m0s for pod "projected-volume-cb165109-d4d9-4389-83aa-94693f7b418a" in namespace "projected-3510" to be "Succeeded or Failed"
May 25 11:59:31.454: INFO: Pod "projected-volume-cb165109-d4d9-4389-83aa-94693f7b418a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.743849ms
May 25 11:59:33.458: INFO: Pod "projected-volume-cb165109-d4d9-4389-83aa-94693f7b418a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007595933s
May 25 11:59:35.462: INFO: Pod "projected-volume-cb165109-d4d9-4389-83aa-94693f7b418a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011489624s
STEP: Saw pod success
May 25 11:59:35.462: INFO: Pod "projected-volume-cb165109-d4d9-4389-83aa-94693f7b418a" satisfied condition "Succeeded or Failed"
May 25 11:59:35.464: INFO: Trying to get logs from node kali-worker2 pod projected-volume-cb165109-d4d9-4389-83aa-94693f7b418a container projected-all-volume-test: 
STEP: delete the pod
May 25 11:59:35.883: INFO: Waiting for pod projected-volume-cb165109-d4d9-4389-83aa-94693f7b418a to disappear
May 25 11:59:35.964: INFO: Pod projected-volume-cb165109-d4d9-4389-83aa-94693f7b418a no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:59:35.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3510" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3873,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:59:35.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 11:59:36.106: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f02ad26b-c8c7-4402-bb7a-a91a40a343ee" in namespace "downward-api-4102" to be "Succeeded or Failed"
May 25 11:59:36.159: INFO: Pod "downwardapi-volume-f02ad26b-c8c7-4402-bb7a-a91a40a343ee": Phase="Pending", Reason="", readiness=false. Elapsed: 53.318138ms
May 25 11:59:38.164: INFO: Pod "downwardapi-volume-f02ad26b-c8c7-4402-bb7a-a91a40a343ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057865345s
May 25 11:59:40.168: INFO: Pod "downwardapi-volume-f02ad26b-c8c7-4402-bb7a-a91a40a343ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061947761s
STEP: Saw pod success
May 25 11:59:40.168: INFO: Pod "downwardapi-volume-f02ad26b-c8c7-4402-bb7a-a91a40a343ee" satisfied condition "Succeeded or Failed"
May 25 11:59:40.170: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-f02ad26b-c8c7-4402-bb7a-a91a40a343ee container client-container: 
STEP: delete the pod
May 25 11:59:40.264: INFO: Waiting for pod downwardapi-volume-f02ad26b-c8c7-4402-bb7a-a91a40a343ee to disappear
May 25 11:59:40.300: INFO: Pod downwardapi-volume-f02ad26b-c8c7-4402-bb7a-a91a40a343ee no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:59:40.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4102" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3891,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:59:40.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
May 25 11:59:40.441: INFO: Waiting up to 5m0s for pod "var-expansion-5b53c9df-c4a1-442b-9fb1-77000aef9231" in namespace "var-expansion-5366" to be "Succeeded or Failed"
May 25 11:59:40.480: INFO: Pod "var-expansion-5b53c9df-c4a1-442b-9fb1-77000aef9231": Phase="Pending", Reason="", readiness=false. Elapsed: 39.178016ms
May 25 11:59:42.840: INFO: Pod "var-expansion-5b53c9df-c4a1-442b-9fb1-77000aef9231": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398263473s
May 25 11:59:44.843: INFO: Pod "var-expansion-5b53c9df-c4a1-442b-9fb1-77000aef9231": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.401237084s
STEP: Saw pod success
May 25 11:59:44.843: INFO: Pod "var-expansion-5b53c9df-c4a1-442b-9fb1-77000aef9231" satisfied condition "Succeeded or Failed"
May 25 11:59:44.845: INFO: Trying to get logs from node kali-worker2 pod var-expansion-5b53c9df-c4a1-442b-9fb1-77000aef9231 container dapi-container: 
STEP: delete the pod
May 25 11:59:44.871: INFO: Waiting for pod var-expansion-5b53c9df-c4a1-442b-9fb1-77000aef9231 to disappear
May 25 11:59:44.970: INFO: Pod var-expansion-5b53c9df-c4a1-442b-9fb1-77000aef9231 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 11:59:44.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5366" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":3915,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 11:59:44.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with type=ExternalName in namespace services-5269
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-5269
I0525 11:59:45.309794       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5269, replica count: 2
I0525 11:59:48.360288       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0525 11:59:51.360562       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 25 11:59:51.360: INFO: Creating new exec pod
May 25 11:59:56.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5269 execpod8czj5 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 25 11:59:59.897: INFO: stderr: "I0525 11:59:59.768833    2607 log.go:172] (0xc00003a420) (0xc000665860) Create stream\nI0525 11:59:59.768870    2607 log.go:172] (0xc00003a420) (0xc000665860) Stream added, broadcasting: 1\nI0525 11:59:59.771415    2607 log.go:172] (0xc00003a420) Reply frame received for 1\nI0525 11:59:59.771470    2607 log.go:172] (0xc00003a420) (0xc000665900) Create stream\nI0525 11:59:59.771483    2607 log.go:172] (0xc00003a420) (0xc000665900) Stream added, broadcasting: 3\nI0525 11:59:59.772192    2607 log.go:172] (0xc00003a420) Reply frame received for 3\nI0525 11:59:59.772224    2607 log.go:172] (0xc00003a420) (0xc0006659a0) Create stream\nI0525 11:59:59.772235    2607 log.go:172] (0xc00003a420) (0xc0006659a0) Stream added, broadcasting: 5\nI0525 11:59:59.772844    2607 log.go:172] (0xc00003a420) Reply frame received for 5\nI0525 11:59:59.855855    2607 log.go:172] (0xc00003a420) Data frame received for 5\nI0525 11:59:59.855884    2607 log.go:172] (0xc0006659a0) (5) Data frame handling\nI0525 11:59:59.855904    2607 log.go:172] (0xc0006659a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0525 11:59:59.888619    2607 log.go:172] (0xc00003a420) Data frame received for 5\nI0525 11:59:59.888661    2607 log.go:172] (0xc0006659a0) (5) Data frame handling\nI0525 11:59:59.888684    2607 log.go:172] (0xc0006659a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0525 11:59:59.888955    2607 log.go:172] (0xc00003a420) Data frame received for 3\nI0525 11:59:59.888976    2607 log.go:172] (0xc000665900) (3) Data frame handling\nI0525 11:59:59.889235    2607 log.go:172] (0xc00003a420) Data frame received for 5\nI0525 11:59:59.889309    2607 log.go:172] (0xc0006659a0) (5) Data frame handling\nI0525 11:59:59.891388    2607 log.go:172] (0xc00003a420) Data frame received for 1\nI0525 11:59:59.891428    2607 log.go:172] (0xc000665860) (1) Data frame handling\nI0525 11:59:59.891464    2607 log.go:172] (0xc000665860) (1) Data frame sent\nI0525 11:59:59.891491    2607 log.go:172] (0xc00003a420) (0xc000665860) Stream removed, broadcasting: 1\nI0525 11:59:59.891511    2607 log.go:172] (0xc00003a420) Go away received\nI0525 11:59:59.891831    2607 log.go:172] (0xc00003a420) (0xc000665860) Stream removed, broadcasting: 1\nI0525 11:59:59.891847    2607 log.go:172] (0xc00003a420) (0xc000665900) Stream removed, broadcasting: 3\nI0525 11:59:59.891853    2607 log.go:172] (0xc00003a420) (0xc0006659a0) Stream removed, broadcasting: 5\n"
May 25 11:59:59.897: INFO: stdout: ""
May 25 11:59:59.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5269 execpod8czj5 -- /bin/sh -x -c nc -zv -t -w 2 10.105.84.64 80'
May 25 12:00:00.165: INFO: stderr: "I0525 12:00:00.095074    2638 log.go:172] (0xc0007f6b00) (0xc0007e6320) Create stream\nI0525 12:00:00.095133    2638 log.go:172] (0xc0007f6b00) (0xc0007e6320) Stream added, broadcasting: 1\nI0525 12:00:00.097538    2638 log.go:172] (0xc0007f6b00) Reply frame received for 1\nI0525 12:00:00.097587    2638 log.go:172] (0xc0007f6b00) (0xc00033d180) Create stream\nI0525 12:00:00.097601    2638 log.go:172] (0xc0007f6b00) (0xc00033d180) Stream added, broadcasting: 3\nI0525 12:00:00.098526    2638 log.go:172] (0xc0007f6b00) Reply frame received for 3\nI0525 12:00:00.098574    2638 log.go:172] (0xc0007f6b00) (0xc00032a000) Create stream\nI0525 12:00:00.098594    2638 log.go:172] (0xc0007f6b00) (0xc00032a000) Stream added, broadcasting: 5\nI0525 12:00:00.099581    2638 log.go:172] (0xc0007f6b00) Reply frame received for 5\nI0525 12:00:00.160728    2638 log.go:172] (0xc0007f6b00) Data frame received for 3\nI0525 12:00:00.160758    2638 log.go:172] (0xc00033d180) (3) Data frame handling\nI0525 12:00:00.160789    2638 log.go:172] (0xc0007f6b00) Data frame received for 5\nI0525 12:00:00.160823    2638 log.go:172] (0xc00032a000) (5) Data frame handling\nI0525 12:00:00.160846    2638 log.go:172] (0xc00032a000) (5) Data frame sent\n+ nc -zv -t -w 2 10.105.84.64 80\nConnection to 10.105.84.64 80 port [tcp/http] succeeded!\nI0525 12:00:00.160860    2638 log.go:172] (0xc0007f6b00) Data frame received for 5\nI0525 12:00:00.160872    2638 log.go:172] (0xc00032a000) (5) Data frame handling\nI0525 12:00:00.162274    2638 log.go:172] (0xc0007f6b00) Data frame received for 1\nI0525 12:00:00.162294    2638 log.go:172] (0xc0007e6320) (1) Data frame handling\nI0525 12:00:00.162307    2638 log.go:172] (0xc0007e6320) (1) Data frame sent\nI0525 12:00:00.162330    2638 log.go:172] (0xc0007f6b00) (0xc0007e6320) Stream removed, broadcasting: 1\nI0525 12:00:00.162356    2638 log.go:172] (0xc0007f6b00) Go away received\nI0525 12:00:00.162633    2638 log.go:172] (0xc0007f6b00) (0xc0007e6320) Stream removed, broadcasting: 1\nI0525 12:00:00.162648    2638 log.go:172] (0xc0007f6b00) (0xc00033d180) Stream removed, broadcasting: 3\nI0525 12:00:00.162655    2638 log.go:172] (0xc0007f6b00) (0xc00032a000) Stream removed, broadcasting: 5\n"
May 25 12:00:00.165: INFO: stdout: ""
May 25 12:00:00.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5269 execpod8czj5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 30749'
May 25 12:00:00.393: INFO: stderr: "I0525 12:00:00.300694    2661 log.go:172] (0xc0000e8630) (0xc0004f0aa0) Create stream\nI0525 12:00:00.300783    2661 log.go:172] (0xc0000e8630) (0xc0004f0aa0) Stream added, broadcasting: 1\nI0525 12:00:00.303876    2661 log.go:172] (0xc0000e8630) Reply frame received for 1\nI0525 12:00:00.303929    2661 log.go:172] (0xc0000e8630) (0xc0009e6000) Create stream\nI0525 12:00:00.303956    2661 log.go:172] (0xc0000e8630) (0xc0009e6000) Stream added, broadcasting: 3\nI0525 12:00:00.305080    2661 log.go:172] (0xc0000e8630) Reply frame received for 3\nI0525 12:00:00.305260    2661 log.go:172] (0xc0000e8630) (0xc000bac000) Create stream\nI0525 12:00:00.305274    2661 log.go:172] (0xc0000e8630) (0xc000bac000) Stream added, broadcasting: 5\nI0525 12:00:00.306454    2661 log.go:172] (0xc0000e8630) Reply frame received for 5\nI0525 12:00:00.385720    2661 log.go:172] (0xc0000e8630) Data frame received for 5\nI0525 12:00:00.385748    2661 log.go:172] (0xc000bac000) (5) Data frame handling\nI0525 12:00:00.385776    2661 log.go:172] (0xc000bac000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.15 30749\nI0525 12:00:00.385976    2661 log.go:172] (0xc0000e8630) Data frame received for 5\nI0525 12:00:00.386021    2661 log.go:172] (0xc000bac000) (5) Data frame handling\nI0525 12:00:00.386045    2661 log.go:172] (0xc000bac000) (5) Data frame sent\nConnection to 172.17.0.15 30749 port [tcp/30749] succeeded!\nI0525 12:00:00.386151    2661 log.go:172] (0xc0000e8630) Data frame received for 5\nI0525 12:00:00.386178    2661 log.go:172] (0xc000bac000) (5) Data frame handling\nI0525 12:00:00.386328    2661 log.go:172] (0xc0000e8630) Data frame received for 3\nI0525 12:00:00.386356    2661 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0525 12:00:00.387978    2661 log.go:172] (0xc0000e8630) Data frame received for 1\nI0525 12:00:00.387994    2661 log.go:172] (0xc0004f0aa0) (1) Data frame handling\nI0525 12:00:00.388004    2661 log.go:172] (0xc0004f0aa0) (1) Data frame sent\nI0525 12:00:00.388019    2661 log.go:172] (0xc0000e8630) (0xc0004f0aa0) Stream removed, broadcasting: 1\nI0525 12:00:00.388035    2661 log.go:172] (0xc0000e8630) Go away received\nI0525 12:00:00.388416    2661 log.go:172] (0xc0000e8630) (0xc0004f0aa0) Stream removed, broadcasting: 1\nI0525 12:00:00.388442    2661 log.go:172] (0xc0000e8630) (0xc0009e6000) Stream removed, broadcasting: 3\nI0525 12:00:00.388454    2661 log.go:172] (0xc0000e8630) (0xc000bac000) Stream removed, broadcasting: 5\n"
May 25 12:00:00.393: INFO: stdout: ""
May 25 12:00:00.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5269 execpod8czj5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30749'
May 25 12:00:00.615: INFO: stderr: "I0525 12:00:00.536987    2681 log.go:172] (0xc000a7a160) (0xc000a4a0a0) Create stream\nI0525 12:00:00.537048    2681 log.go:172] (0xc000a7a160) (0xc000a4a0a0) Stream added, broadcasting: 1\nI0525 12:00:00.539752    2681 log.go:172] (0xc000a7a160) Reply frame received for 1\nI0525 12:00:00.539794    2681 log.go:172] (0xc000a7a160) (0xc000a4a140) Create stream\nI0525 12:00:00.539807    2681 log.go:172] (0xc000a7a160) (0xc000a4a140) Stream added, broadcasting: 3\nI0525 12:00:00.540659    2681 log.go:172] (0xc000a7a160) Reply frame received for 3\nI0525 12:00:00.540699    2681 log.go:172] (0xc000a7a160) (0xc0006d9220) Create stream\nI0525 12:00:00.540713    2681 log.go:172] (0xc000a7a160) (0xc0006d9220) Stream added, broadcasting: 5\nI0525 12:00:00.541905    2681 log.go:172] (0xc000a7a160) Reply frame received for 5\nI0525 12:00:00.608053    2681 log.go:172] (0xc000a7a160) Data frame received for 3\nI0525 12:00:00.608093    2681 log.go:172] (0xc000a4a140) (3) Data frame handling\nI0525 12:00:00.608127    2681 log.go:172] (0xc000a7a160) Data frame received for 5\nI0525 12:00:00.608150    2681 log.go:172] (0xc0006d9220) (5) Data frame handling\nI0525 12:00:00.608164    2681 log.go:172] (0xc0006d9220) (5) Data frame sent\nI0525 12:00:00.608175    2681 log.go:172] (0xc000a7a160) Data frame received for 5\nI0525 12:00:00.608183    2681 log.go:172] (0xc0006d9220) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 30749\nConnection to 172.17.0.18 30749 port [tcp/30749] succeeded!\nI0525 12:00:00.610005    2681 log.go:172] (0xc000a7a160) Data frame received for 1\nI0525 12:00:00.610037    2681 log.go:172] (0xc000a4a0a0) (1) Data frame handling\nI0525 12:00:00.610057    2681 log.go:172] (0xc000a4a0a0) (1) Data frame sent\nI0525 12:00:00.610072    2681 log.go:172] (0xc000a7a160) (0xc000a4a0a0) Stream removed, broadcasting: 1\nI0525 12:00:00.610091    2681 log.go:172] (0xc000a7a160) Go away received\nI0525 12:00:00.610545    2681 log.go:172] (0xc000a7a160) (0xc000a4a0a0) Stream removed, broadcasting: 1\nI0525 12:00:00.610569    2681 log.go:172] (0xc000a7a160) (0xc000a4a140) Stream removed, broadcasting: 3\nI0525 12:00:00.610581    2681 log.go:172] (0xc000a7a160) (0xc0006d9220) Stream removed, broadcasting: 5\n"
May 25 12:00:00.615: INFO: stdout: ""
May 25 12:00:00.615: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:00:00.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5269" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:15.772 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":234,"skipped":3928,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:00:00.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the busybox-main-container
May 25 12:00:07.211: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3545 PodName:pod-sharedvolume-8f691827-66bf-46ed-86d1-13b1e2ea8b5d ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 25 12:00:07.211: INFO: >>> kubeConfig: /root/.kube/config
I0525 12:00:07.240774       7 log.go:172] (0xc00187c790) (0xc000415cc0) Create stream
I0525 12:00:07.240809       7 log.go:172] (0xc00187c790) (0xc000415cc0) Stream added, broadcasting: 1
I0525 12:00:07.242973       7 log.go:172] (0xc00187c790) Reply frame received for 1
I0525 12:00:07.243011       7 log.go:172] (0xc00187c790) (0xc002908e60) Create stream
I0525 12:00:07.243024       7 log.go:172] (0xc00187c790) (0xc002908e60) Stream added, broadcasting: 3
I0525 12:00:07.243910       7 log.go:172] (0xc00187c790) Reply frame received for 3
I0525 12:00:07.243954       7 log.go:172] (0xc00187c790) (0xc001268f00) Create stream
I0525 12:00:07.243965       7 log.go:172] (0xc00187c790) (0xc001268f00) Stream added, broadcasting: 5
I0525 12:00:07.244833       7 log.go:172] (0xc00187c790) Reply frame received for 5
I0525 12:00:07.314411       7 log.go:172] (0xc00187c790) Data frame received for 5
I0525 12:00:07.314432       7 log.go:172] (0xc001268f00) (5) Data frame handling
I0525 12:00:07.314484       7 log.go:172] (0xc00187c790) Data frame received for 3
I0525 12:00:07.314514       7 log.go:172] (0xc002908e60) (3) Data frame handling
I0525 12:00:07.314534       7 log.go:172] (0xc002908e60) (3) Data frame sent
I0525 12:00:07.314545       7 log.go:172] (0xc00187c790) Data frame received for 3
I0525 12:00:07.314556       7 log.go:172] (0xc002908e60) (3) Data frame handling
I0525 12:00:07.316176       7 log.go:172] (0xc00187c790) Data frame received for 1
I0525 12:00:07.316218       7 log.go:172] (0xc000415cc0) (1) Data frame handling
I0525 12:00:07.316239       7 log.go:172] (0xc000415cc0) (1) Data frame sent
I0525 12:00:07.316250       7 log.go:172] (0xc00187c790) (0xc000415cc0) Stream removed, broadcasting: 1
I0525 12:00:07.316268       7 log.go:172] (0xc00187c790) Go away received
I0525 12:00:07.316410       7 log.go:172] (0xc00187c790) (0xc000415cc0) Stream removed, broadcasting: 1
I0525 12:00:07.316435       7 log.go:172] (0xc00187c790) (0xc002908e60) Stream removed, broadcasting: 3
I0525 12:00:07.316449       7 log.go:172] (0xc00187c790) (0xc001268f00) Stream removed, broadcasting: 5
May 25 12:00:07.316: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:00:07.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3545" for this suite.

• [SLOW TEST:6.599 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":235,"skipped":3945,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:00:07.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:00:14.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1075" for this suite.
STEP: Destroying namespace "nsdeletetest-4375" for this suite.
May 25 12:00:14.043: INFO: Namespace nsdeletetest-4375 was already deleted
STEP: Destroying namespace "nsdeletetest-5996" for this suite.

• [SLOW TEST:6.696 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":236,"skipped":3957,"failed":0}
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:00:14.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:00:14.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2798" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":237,"skipped":3958,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:00:14.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 25 12:00:14.309: INFO: Waiting up to 5m0s for pod "pod-c40862cb-c30c-45a1-b8a5-a3604d471546" in namespace "emptydir-1392" to be "Succeeded or Failed"
May 25 12:00:14.375: INFO: Pod "pod-c40862cb-c30c-45a1-b8a5-a3604d471546": Phase="Pending", Reason="", readiness=false. Elapsed: 65.464371ms
May 25 12:00:16.546: INFO: Pod "pod-c40862cb-c30c-45a1-b8a5-a3604d471546": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237040473s
May 25 12:00:18.550: INFO: Pod "pod-c40862cb-c30c-45a1-b8a5-a3604d471546": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240265372s
May 25 12:00:20.554: INFO: Pod "pod-c40862cb-c30c-45a1-b8a5-a3604d471546": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.244443908s
STEP: Saw pod success
May 25 12:00:20.554: INFO: Pod "pod-c40862cb-c30c-45a1-b8a5-a3604d471546" satisfied condition "Succeeded or Failed"
May 25 12:00:20.556: INFO: Trying to get logs from node kali-worker pod pod-c40862cb-c30c-45a1-b8a5-a3604d471546 container test-container: 
STEP: delete the pod
May 25 12:00:20.602: INFO: Waiting for pod pod-c40862cb-c30c-45a1-b8a5-a3604d471546 to disappear
May 25 12:00:20.615: INFO: Pod pod-c40862cb-c30c-45a1-b8a5-a3604d471546 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:00:20.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1392" for this suite.

• [SLOW TEST:6.443 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":3959,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:00:20.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 12:00:21.341: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 12:00:23.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004821, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004821, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004821, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004821, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 12:00:25.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004821, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004821, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004821, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004821, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 12:00:28.386: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 12:00:28.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-504-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:00:30.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6338" for this suite.
STEP: Destroying namespace "webhook-6338-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.866 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":239,"skipped":3962,"failed":0}
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:00:30.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 25 12:00:30.606: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:00:39.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4431" for this suite.

• [SLOW TEST:8.609 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":240,"skipped":3962,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:00:39.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 12:00:39.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0dc6764-c890-40f7-950d-6a8db93e8b52" in namespace "projected-5608" to be "Succeeded or Failed"
May 25 12:00:39.200: INFO: Pod "downwardapi-volume-a0dc6764-c890-40f7-950d-6a8db93e8b52": Phase="Pending", Reason="", readiness=false. Elapsed: 5.07114ms
May 25 12:00:41.204: INFO: Pod "downwardapi-volume-a0dc6764-c890-40f7-950d-6a8db93e8b52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008714386s
May 25 12:00:43.208: INFO: Pod "downwardapi-volume-a0dc6764-c890-40f7-950d-6a8db93e8b52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012847377s
STEP: Saw pod success
May 25 12:00:43.208: INFO: Pod "downwardapi-volume-a0dc6764-c890-40f7-950d-6a8db93e8b52" satisfied condition "Succeeded or Failed"
May 25 12:00:43.212: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a0dc6764-c890-40f7-950d-6a8db93e8b52 container client-container: 
STEP: delete the pod
May 25 12:00:43.234: INFO: Waiting for pod downwardapi-volume-a0dc6764-c890-40f7-950d-6a8db93e8b52 to disappear
May 25 12:00:43.255: INFO: Pod downwardapi-volume-a0dc6764-c890-40f7-950d-6a8db93e8b52 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:00:43.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5608" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":3971,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:00:43.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
May 25 12:00:48.461: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:00:48.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4964" for this suite.

• [SLOW TEST:5.464 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":242,"skipped":4014,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:00:48.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 12:00:50.802: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 12:00:52.895: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004851, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004851, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004851, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004850, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 12:00:54.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004851, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004851, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004851, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004850, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 12:00:57.954: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:00:58.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4271" for this suite.
STEP: Destroying namespace "webhook-4271-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.541 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":243,"skipped":4029,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:00:58.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 25 12:00:58.368: INFO: Waiting up to 5m0s for pod "pod-101943e1-2e45-4234-a3a8-556e19a6494f" in namespace "emptydir-4502" to be "Succeeded or Failed"
May 25 12:00:58.377: INFO: Pod "pod-101943e1-2e45-4234-a3a8-556e19a6494f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.860067ms
May 25 12:01:00.469: INFO: Pod "pod-101943e1-2e45-4234-a3a8-556e19a6494f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101222574s
May 25 12:01:02.553: INFO: Pod "pod-101943e1-2e45-4234-a3a8-556e19a6494f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.184966256s
STEP: Saw pod success
May 25 12:01:02.553: INFO: Pod "pod-101943e1-2e45-4234-a3a8-556e19a6494f" satisfied condition "Succeeded or Failed"
May 25 12:01:02.556: INFO: Trying to get logs from node kali-worker2 pod pod-101943e1-2e45-4234-a3a8-556e19a6494f container test-container: 
STEP: delete the pod
May 25 12:01:02.612: INFO: Waiting for pod pod-101943e1-2e45-4234-a3a8-556e19a6494f to disappear
May 25 12:01:02.708: INFO: Pod pod-101943e1-2e45-4234-a3a8-556e19a6494f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:01:02.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4502" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4061,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:01:02.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 25 12:01:03.582: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created
May 25 12:01:05.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004863, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004863, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004863, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004863, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 12:01:07.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004863, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004863, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004863, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004863, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 12:01:10.655: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 12:01:10.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:01:11.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5327" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:9.251 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":245,"skipped":4062,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:01:11.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
May 25 12:01:12.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-5498 -- logs-generator --log-lines-total 100 --run-duration 20s'
May 25 12:01:12.578: INFO: stderr: ""
May 25 12:01:12.578: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
May 25 12:01:12.578: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
May 25 12:01:12.578: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5498" to be "running and ready, or succeeded"
May 25 12:01:12.655: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 76.867989ms
May 25 12:01:14.660: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081385506s
May 25 12:01:16.665: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.086594065s
May 25 12:01:16.665: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
May 25 12:01:16.665: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
May 25 12:01:16.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5498'
May 25 12:01:16.766: INFO: stderr: ""
May 25 12:01:16.767: INFO: stdout: "I0525 12:01:15.433974       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/lpz 579\nI0525 12:01:15.634260       1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/whp 285\nI0525 12:01:15.834203       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/7hp 248\nI0525 12:01:16.034274       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/5l5g 269\nI0525 12:01:16.234195       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/lhg8 581\nI0525 12:01:16.434256       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/9ls 364\nI0525 12:01:16.634194       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/mk2h 364\n"
STEP: limiting log lines
May 25 12:01:16.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5498 --tail=1'
May 25 12:01:16.872: INFO: stderr: ""
May 25 12:01:16.872: INFO: stdout: "I0525 12:01:16.834129       1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/pkq 353\n"
May 25 12:01:16.872: INFO: got output "I0525 12:01:16.834129       1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/pkq 353\n"
STEP: limiting log bytes
May 25 12:01:16.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5498 --limit-bytes=1'
May 25 12:01:17.006: INFO: stderr: ""
May 25 12:01:17.006: INFO: stdout: "I"
May 25 12:01:17.006: INFO: got output "I"
STEP: exposing timestamps
May 25 12:01:17.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5498 --tail=1 --timestamps'
May 25 12:01:17.119: INFO: stderr: ""
May 25 12:01:17.119: INFO: stdout: "2020-05-25T12:01:17.034344747Z I0525 12:01:17.034179       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/5xbh 447\n"
May 25 12:01:17.119: INFO: got output "2020-05-25T12:01:17.034344747Z I0525 12:01:17.034179       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/5xbh 447\n"
STEP: restricting to a time range
May 25 12:01:19.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5498 --since=1s'
May 25 12:01:19.720: INFO: stderr: ""
May 25 12:01:19.720: INFO: stdout: "I0525 12:01:18.834137       1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/cws 276\nI0525 12:01:19.034198       1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/5mf 509\nI0525 12:01:19.234143       1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/xr6n 333\nI0525 12:01:19.434224       1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/2fhb 598\nI0525 12:01:19.634134       1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/lb5 319\n"
May 25 12:01:19.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5498 --since=24h'
May 25 12:01:19.815: INFO: stderr: ""
May 25 12:01:19.815: INFO: stdout: "I0525 12:01:15.433974       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/lpz 579\nI0525 12:01:15.634260       1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/whp 285\nI0525 12:01:15.834203       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/7hp 248\nI0525 12:01:16.034274       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/5l5g 269\nI0525 12:01:16.234195       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/lhg8 581\nI0525 12:01:16.434256       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/9ls 364\nI0525 12:01:16.634194       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/mk2h 364\nI0525 12:01:16.834129       1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/pkq 353\nI0525 12:01:17.034179       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/5xbh 447\nI0525 12:01:17.234225       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/b8qm 439\nI0525 12:01:17.434152       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/29dx 202\nI0525 12:01:17.634134       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/hc6 296\nI0525 12:01:17.834181       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/glkr 354\nI0525 12:01:18.034148       1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/rr9 213\nI0525 12:01:18.234190       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/wbzg 301\nI0525 12:01:18.434205       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/nr6 586\nI0525 12:01:18.634160       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/8cq 571\nI0525 12:01:18.834137       1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/cws 276\nI0525 12:01:19.034198       1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/5mf 509\nI0525 12:01:19.234143       1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/xr6n 333\nI0525 12:01:19.434224       1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/2fhb 598\nI0525 12:01:19.634134       1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/lb5 319\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
May 25 12:01:19.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5498'
May 25 12:01:22.314: INFO: stderr: ""
May 25 12:01:22.314: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:01:22.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5498" for this suite.

• [SLOW TEST:10.350 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":246,"skipped":4067,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:01:22.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-f8c60914-dcce-4e8a-9659-25300f804722
STEP: Creating a pod to test consume secrets
May 25 12:01:22.408: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd89c89c-8d27-47b6-a74e-f2de1c3544a5" in namespace "projected-1509" to be "Succeeded or Failed"
May 25 12:01:22.415: INFO: Pod "pod-projected-secrets-cd89c89c-8d27-47b6-a74e-f2de1c3544a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.962736ms
May 25 12:01:24.421: INFO: Pod "pod-projected-secrets-cd89c89c-8d27-47b6-a74e-f2de1c3544a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012230608s
May 25 12:01:26.424: INFO: Pod "pod-projected-secrets-cd89c89c-8d27-47b6-a74e-f2de1c3544a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016034093s
STEP: Saw pod success
May 25 12:01:26.424: INFO: Pod "pod-projected-secrets-cd89c89c-8d27-47b6-a74e-f2de1c3544a5" satisfied condition "Succeeded or Failed"
May 25 12:01:26.427: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-cd89c89c-8d27-47b6-a74e-f2de1c3544a5 container projected-secret-volume-test: 
STEP: delete the pod
May 25 12:01:26.495: INFO: Waiting for pod pod-projected-secrets-cd89c89c-8d27-47b6-a74e-f2de1c3544a5 to disappear
May 25 12:01:26.545: INFO: Pod pod-projected-secrets-cd89c89c-8d27-47b6-a74e-f2de1c3544a5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:01:26.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1509" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4069,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:01:26.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 25 12:01:26.722: INFO: Waiting up to 5m0s for pod "downward-api-34277f89-41d9-4198-9367-8356f48e8dd0" in namespace "downward-api-7536" to be "Succeeded or Failed"
May 25 12:01:26.725: INFO: Pod "downward-api-34277f89-41d9-4198-9367-8356f48e8dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.822313ms
May 25 12:01:28.729: INFO: Pod "downward-api-34277f89-41d9-4198-9367-8356f48e8dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006998159s
May 25 12:01:30.739: INFO: Pod "downward-api-34277f89-41d9-4198-9367-8356f48e8dd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016559283s
STEP: Saw pod success
May 25 12:01:30.739: INFO: Pod "downward-api-34277f89-41d9-4198-9367-8356f48e8dd0" satisfied condition "Succeeded or Failed"
May 25 12:01:30.742: INFO: Trying to get logs from node kali-worker2 pod downward-api-34277f89-41d9-4198-9367-8356f48e8dd0 container dapi-container: 
STEP: delete the pod
May 25 12:01:30.762: INFO: Waiting for pod downward-api-34277f89-41d9-4198-9367-8356f48e8dd0 to disappear
May 25 12:01:30.779: INFO: Pod downward-api-34277f89-41d9-4198-9367-8356f48e8dd0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:01:30.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7536" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4085,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:01:30.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:02:02.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-882" for this suite.
STEP: Destroying namespace "nsdeletetest-4890" for this suite.
May 25 12:02:02.462: INFO: Namespace nsdeletetest-4890 was already deleted
STEP: Destroying namespace "nsdeletetest-9139" for this suite.

• [SLOW TEST:31.679 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":249,"skipped":4113,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:02:02.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-018462d4-fbfc-478a-b977-d94b47578612
STEP: Creating a pod to test consume secrets
May 25 12:02:02.571: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ca3c3aaa-ef2a-4f42-a74f-dabaf19b98a2" in namespace "projected-37" to be "Succeeded or Failed"
May 25 12:02:02.574: INFO: Pod "pod-projected-secrets-ca3c3aaa-ef2a-4f42-a74f-dabaf19b98a2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.151491ms
May 25 12:02:04.579: INFO: Pod "pod-projected-secrets-ca3c3aaa-ef2a-4f42-a74f-dabaf19b98a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007645252s
May 25 12:02:06.583: INFO: Pod "pod-projected-secrets-ca3c3aaa-ef2a-4f42-a74f-dabaf19b98a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012410814s
STEP: Saw pod success
May 25 12:02:06.583: INFO: Pod "pod-projected-secrets-ca3c3aaa-ef2a-4f42-a74f-dabaf19b98a2" satisfied condition "Succeeded or Failed"
May 25 12:02:06.587: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-ca3c3aaa-ef2a-4f42-a74f-dabaf19b98a2 container projected-secret-volume-test: 
STEP: delete the pod
May 25 12:02:06.633: INFO: Waiting for pod pod-projected-secrets-ca3c3aaa-ef2a-4f42-a74f-dabaf19b98a2 to disappear
May 25 12:02:06.651: INFO: Pod pod-projected-secrets-ca3c3aaa-ef2a-4f42-a74f-dabaf19b98a2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:02:06.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-37" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4136,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:02:06.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 12:02:10.919: INFO: Waiting up to 5m0s for pod "client-envvars-a33bc02f-7191-4179-a555-56e4b0934a53" in namespace "pods-7514" to be "Succeeded or Failed"
May 25 12:02:10.960: INFO: Pod "client-envvars-a33bc02f-7191-4179-a555-56e4b0934a53": Phase="Pending", Reason="", readiness=false. Elapsed: 41.151944ms
May 25 12:02:12.965: INFO: Pod "client-envvars-a33bc02f-7191-4179-a555-56e4b0934a53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045954986s
May 25 12:02:14.971: INFO: Pod "client-envvars-a33bc02f-7191-4179-a555-56e4b0934a53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052143085s
STEP: Saw pod success
May 25 12:02:14.971: INFO: Pod "client-envvars-a33bc02f-7191-4179-a555-56e4b0934a53" satisfied condition "Succeeded or Failed"
May 25 12:02:14.975: INFO: Trying to get logs from node kali-worker pod client-envvars-a33bc02f-7191-4179-a555-56e4b0934a53 container env3cont: 
STEP: delete the pod
May 25 12:02:14.990: INFO: Waiting for pod client-envvars-a33bc02f-7191-4179-a555-56e4b0934a53 to disappear
May 25 12:02:14.995: INFO: Pod client-envvars-a33bc02f-7191-4179-a555-56e4b0934a53 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:02:14.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7514" for this suite.

• [SLOW TEST:8.346 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4172,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:02:15.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-bf620ddb-0044-48e8-bc56-ec7edeb35ebd
May 25 12:02:15.112: INFO: Pod name my-hostname-basic-bf620ddb-0044-48e8-bc56-ec7edeb35ebd: Found 0 pods out of 1
May 25 12:02:20.130: INFO: Pod name my-hostname-basic-bf620ddb-0044-48e8-bc56-ec7edeb35ebd: Found 1 pods out of 1
May 25 12:02:20.130: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-bf620ddb-0044-48e8-bc56-ec7edeb35ebd" are running
May 25 12:02:20.148: INFO: Pod "my-hostname-basic-bf620ddb-0044-48e8-bc56-ec7edeb35ebd-9c2xf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 12:02:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 12:02:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 12:02:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 12:02:15 +0000 UTC Reason: Message:}])
May 25 12:02:20.148: INFO: Trying to dial the pod
May 25 12:02:25.160: INFO: Controller my-hostname-basic-bf620ddb-0044-48e8-bc56-ec7edeb35ebd: Got expected result from replica 1 [my-hostname-basic-bf620ddb-0044-48e8-bc56-ec7edeb35ebd-9c2xf]: "my-hostname-basic-bf620ddb-0044-48e8-bc56-ec7edeb35ebd-9c2xf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:02:25.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4363" for this suite.

• [SLOW TEST:10.146 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":252,"skipped":4205,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:02:25.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-fcada205-d6e3-4f0d-8256-612d099595a3
STEP: Creating a pod to test consume secrets
May 25 12:02:25.310: INFO: Waiting up to 5m0s for pod "pod-secrets-1d7b2907-7372-4790-8dca-6a9e65ab60d4" in namespace "secrets-6360" to be "Succeeded or Failed"
May 25 12:02:25.326: INFO: Pod "pod-secrets-1d7b2907-7372-4790-8dca-6a9e65ab60d4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.284543ms
May 25 12:02:27.360: INFO: Pod "pod-secrets-1d7b2907-7372-4790-8dca-6a9e65ab60d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050458375s
May 25 12:02:29.364: INFO: Pod "pod-secrets-1d7b2907-7372-4790-8dca-6a9e65ab60d4": Phase="Running", Reason="", readiness=true. Elapsed: 4.05458833s
May 25 12:02:31.369: INFO: Pod "pod-secrets-1d7b2907-7372-4790-8dca-6a9e65ab60d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058865185s
STEP: Saw pod success
May 25 12:02:31.369: INFO: Pod "pod-secrets-1d7b2907-7372-4790-8dca-6a9e65ab60d4" satisfied condition "Succeeded or Failed"
May 25 12:02:31.371: INFO: Trying to get logs from node kali-worker pod pod-secrets-1d7b2907-7372-4790-8dca-6a9e65ab60d4 container secret-volume-test: 
STEP: delete the pod
May 25 12:02:31.477: INFO: Waiting for pod pod-secrets-1d7b2907-7372-4790-8dca-6a9e65ab60d4 to disappear
May 25 12:02:31.541: INFO: Pod pod-secrets-1d7b2907-7372-4790-8dca-6a9e65ab60d4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:02:31.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6360" for this suite.

• [SLOW TEST:6.382 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4208,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:02:31.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-f8b1dcfa-4f2b-43a9-99d9-0fb0fb1b28e8
STEP: Creating a pod to test consume configMaps
May 25 12:02:32.021: INFO: Waiting up to 5m0s for pod "pod-configmaps-417e3e18-7240-4514-8b82-d203ccc07a3e" in namespace "configmap-4647" to be "Succeeded or Failed"
May 25 12:02:32.086: INFO: Pod "pod-configmaps-417e3e18-7240-4514-8b82-d203ccc07a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 65.198318ms
May 25 12:02:34.140: INFO: Pod "pod-configmaps-417e3e18-7240-4514-8b82-d203ccc07a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118548368s
May 25 12:02:36.194: INFO: Pod "pod-configmaps-417e3e18-7240-4514-8b82-d203ccc07a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17323242s
May 25 12:02:38.199: INFO: Pod "pod-configmaps-417e3e18-7240-4514-8b82-d203ccc07a3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.17797427s
STEP: Saw pod success
May 25 12:02:38.199: INFO: Pod "pod-configmaps-417e3e18-7240-4514-8b82-d203ccc07a3e" satisfied condition "Succeeded or Failed"
May 25 12:02:38.202: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-417e3e18-7240-4514-8b82-d203ccc07a3e container configmap-volume-test: 
STEP: delete the pod
May 25 12:02:38.245: INFO: Waiting for pod pod-configmaps-417e3e18-7240-4514-8b82-d203ccc07a3e to disappear
May 25 12:02:38.251: INFO: Pod pod-configmaps-417e3e18-7240-4514-8b82-d203ccc07a3e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:02:38.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4647" for this suite.

• [SLOW TEST:6.733 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4239,"failed":0}
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:02:38.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-cd9206b0-b3cc-4f37-a471-fc8e464a9d57
STEP: Creating configMap with name cm-test-opt-upd-f3b2cf76-b7a8-45c9-b179-84452af6f09a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-cd9206b0-b3cc-4f37-a471-fc8e464a9d57
STEP: Updating configmap cm-test-opt-upd-f3b2cf76-b7a8-45c9-b179-84452af6f09a
STEP: Creating configMap with name cm-test-opt-create-a399a500-0bba-45f0-b12b-4000881de3be
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:02:47.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-380" for this suite.

• [SLOW TEST:9.036 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4239,"failed":0}
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:02:47.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 12:02:48.458: INFO: Creating deployment "test-recreate-deployment"
May 25 12:02:48.734: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
May 25 12:02:49.130: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May 25 12:02:51.380: INFO: Waiting for deployment "test-recreate-deployment" to complete
May 25 12:02:51.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004969, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004969, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004970, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004968, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 12:02:53.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004969, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004969, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004970, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004968, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 12:02:55.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004969, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004969, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004970, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726004968, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 12:02:57.901: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May 25 12:02:57.977: INFO: Updating deployment test-recreate-deployment
May 25 12:02:57.977: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 25 12:02:59.764: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-1405 /apis/apps/v1/namespaces/deployment-1405/deployments/test-recreate-deployment 86c46cdc-17d7-4733-9321-d3d2a6bb6fd3 7188370 2 2020-05-25 12:02:48 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-25 12:02:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[managedFields bytes elided],}} {kube-controller-manager Update apps/v1 2020-05-25 12:02:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[managedFields bytes elided],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038005b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-25 12:02:59 +0000 UTC,LastTransitionTime:2020-05-25 12:02:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-25 12:02:59 +0000 UTC,LastTransitionTime:2020-05-25 12:02:48 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

May 25 12:02:59.769: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-1405 /apis/apps/v1/namespaces/deployment-1405/replicasets/test-recreate-deployment-d5667d9c7 321fe260-4356-4529-afbd-b60a76194ded 7188369 1 2020-05-25 12:02:58 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 86c46cdc-17d7-4733-9321-d3d2a6bb6fd3 0xc003801010 0xc003801011}] []  [{kube-controller-manager Update apps/v1 2020-05-25 12:02:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[managedFields bytes elided],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038010d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 25 12:02:59.769: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
May 25 12:02:59.769: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-1405 /apis/apps/v1/namespaces/deployment-1405/replicasets/test-recreate-deployment-74d98b5f7c 35ea9c79-7d12-4799-ac46-55167784e588 7188356 2 2020-05-25 12:02:48 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 86c46cdc-17d7-4733-9321-d3d2a6bb6fd3 0xc003800e47 0xc003800e48}] []  [{kube-controller-manager Update apps/v1 2020-05-25 12:02:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[managedFields bytes elided],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003800f58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 25 12:02:59.799: INFO: Pod "test-recreate-deployment-d5667d9c7-nldpw" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-nldpw test-recreate-deployment-d5667d9c7- deployment-1405 /api/v1/namespaces/deployment-1405/pods/test-recreate-deployment-d5667d9c7-nldpw d3953555-18b5-4466-8f95-ba37bcb972b2 7188367 0 2020-05-25 12:02:58 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 321fe260-4356-4529-afbd-b60a76194ded 0xc0038304e0 0xc0038304e1}] []  [{kube-controller-manager Update v1 2020-05-25 12:02:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[managedFields bytes elided],}} {kubelet Update v1 2020-05-25 12:02:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[managedFields bytes elided],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qc5t2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qc5t2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qc5t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 12:02:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 12:02:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 12:02:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 12:02:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-25 12:02:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:02:59.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1405" for this suite.

• [SLOW TEST:12.487 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":256,"skipped":4239,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
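The Deployment test above exercises the Recreate strategy: the old ReplicaSet is scaled to 0 before any pod from the new template is created, which is why the final dump shows the old ReplicaSet at Replicas:*0 while the new pod is still Pending. For readers reproducing the flow outside the e2e framework, the following is a minimal client-go (v0.18+ signatures) sketch, not the framework's own helper; the kubeconfig path, namespace, labels, and image are taken from the log, while variable names and the poll interval are illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as printed in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	replicas := int32(1)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
			// Recreate: all old pods are deleted before new ones are created.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}

	ns := "deployment-1405" // namespace name taken from the log
	ctx := context.TODO()
	if _, err := cs.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll the Deployment status, mirroring the "Waiting deployment ... to complete" lines above.
	for i := 0; i < 30; i++ {
		cur, err := cs.AppsV1().Deployments(ns).Get(ctx, d.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("observedGeneration=%d updated=%d available=%d\n",
			cur.Status.ObservedGeneration, cur.Status.UpdatedReplicas, cur.Status.AvailableReplicas)
		if cur.Status.AvailableReplicas == replicas {
			break
		}
		time.Sleep(2 * time.Second)
	}
}
```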
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:02:59.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
May 25 12:02:59.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
May 25 12:03:11.533: INFO: >>> kubeConfig: /root/.kube/config
May 25 12:03:13.463: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:03:24.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7751" for this suite.

• [SLOW TEST:24.343 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":257,"skipped":4262,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
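The CustomResourcePublishOpenAPI test above checks that custom resources served under one group in several versions are published in the apiserver's aggregated OpenAPI document. A minimal way to verify the same thing with client-go is to fetch /openapi/v2 and look for the generated definitions; this is only a sketch, the kubeconfig path is the one from the log, and the substring it searches for is a hypothetical stand-in since the e2e test randomizes its CRD group and kind names.

```go
package main

import (
	"bytes"
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path taken from the log
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the aggregated OpenAPI v2 document that the apiserver publishes.
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}

	// Hypothetical substring; substitute the kind of the CRD you created.
	needle := []byte("e2e-test-crd-publish-openapi")
	fmt.Printf("definition containing %q published: %v\n", needle, bytes.Contains(raw, needle))
}
```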
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:03:24.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 25 12:03:24.240: INFO: Waiting up to 5m0s for pod "pod-e82ff281-1740-4838-bb22-8173b19101b1" in namespace "emptydir-2264" to be "Succeeded or Failed"
May 25 12:03:24.275: INFO: Pod "pod-e82ff281-1740-4838-bb22-8173b19101b1": Phase="Pending", Reason="", readiness=false. Elapsed: 34.789864ms
May 25 12:03:26.279: INFO: Pod "pod-e82ff281-1740-4838-bb22-8173b19101b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038622601s
May 25 12:03:28.284: INFO: Pod "pod-e82ff281-1740-4838-bb22-8173b19101b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04357607s
May 25 12:03:30.288: INFO: Pod "pod-e82ff281-1740-4838-bb22-8173b19101b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047734759s
STEP: Saw pod success
May 25 12:03:30.288: INFO: Pod "pod-e82ff281-1740-4838-bb22-8173b19101b1" satisfied condition "Succeeded or Failed"
May 25 12:03:30.291: INFO: Trying to get logs from node kali-worker pod pod-e82ff281-1740-4838-bb22-8173b19101b1 container test-container: 
STEP: delete the pod
May 25 12:03:30.328: INFO: Waiting for pod pod-e82ff281-1740-4838-bb22-8173b19101b1 to disappear
May 25 12:03:30.380: INFO: Pod pod-e82ff281-1740-4838-bb22-8173b19101b1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:03:30.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2264" for this suite.

• [SLOW TEST:6.238 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4338,"failed":0}
SSSSSSSSSSSSSSSSSSS
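The EmptyDir test above schedules a pod whose volume uses medium "Memory" (tmpfs), has the container create a file with the requested 0666 mode as root, and then waits for the pod to reach "Succeeded or Failed" exactly as logged. The sketch below mirrors that wait loop with client-go; the pod name, image, and shell command are stand-ins (the conformance test uses its own mount-test container and arguments), while the namespace and kubeconfig path are taken from the log.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "emptydir-2264" // namespace name taken from the log
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-tmpfs-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium=Memory backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in image, not the conformance test's own container
				Command: []string{"/bin/sh", "-c",
					"touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f && mount | grep /mnt/volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
			}},
		},
	}

	ctx := context.TODO()
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Mirror the log's "Succeeded or Failed" wait: poll the pod phase until it is terminal.
	for {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("phase=%s\n", p.Status.Phase)
		if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
			break
		}
		time.Sleep(2 * time.Second)
	}
}
```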
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:03:30.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1651
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-1651
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1651
May 25 12:03:30.558: INFO: Found 0 stateful pods, waiting for 1
May 25 12:03:40.562: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 25 12:03:40.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 25 12:03:40.842: INFO: stderr: "I0525 12:03:40.707889    2869 log.go:172] (0xc000b1b290) (0xc000bc03c0) Create stream\nI0525 12:03:40.707967    2869 log.go:172] (0xc000b1b290) (0xc000bc03c0) Stream added, broadcasting: 1\nI0525 12:03:40.710861    2869 log.go:172] (0xc000b1b290) Reply frame received for 1\nI0525 12:03:40.710908    2869 log.go:172] (0xc000b1b290) (0xc000bc0460) Create stream\nI0525 12:03:40.710922    2869 log.go:172] (0xc000b1b290) (0xc000bc0460) Stream added, broadcasting: 3\nI0525 12:03:40.711763    2869 log.go:172] (0xc000b1b290) Reply frame received for 3\nI0525 12:03:40.711803    2869 log.go:172] (0xc000b1b290) (0xc000b12280) Create stream\nI0525 12:03:40.711818    2869 log.go:172] (0xc000b1b290) (0xc000b12280) Stream added, broadcasting: 5\nI0525 12:03:40.712700    2869 log.go:172] (0xc000b1b290) Reply frame received for 5\nI0525 12:03:40.798786    2869 log.go:172] (0xc000b1b290) Data frame received for 5\nI0525 12:03:40.798810    2869 log.go:172] (0xc000b12280) (5) Data frame handling\nI0525 12:03:40.798826    2869 log.go:172] (0xc000b12280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 12:03:40.823930    2869 log.go:172] (0xc000b1b290) Data frame received for 3\nI0525 12:03:40.823947    2869 log.go:172] (0xc000bc0460) (3) Data frame handling\nI0525 12:03:40.823962    2869 log.go:172] (0xc000bc0460) (3) Data frame sent\nI0525 12:03:40.823991    2869 log.go:172] (0xc000b1b290) Data frame received for 5\nI0525 12:03:40.824026    2869 log.go:172] (0xc000b12280) (5) Data frame handling\nI0525 12:03:40.824058    2869 log.go:172] (0xc000b1b290) Data frame received for 3\nI0525 12:03:40.824272    2869 log.go:172] (0xc000bc0460) (3) Data frame handling\nI0525 12:03:40.836529    2869 log.go:172] (0xc000b1b290) Data frame received for 1\nI0525 12:03:40.836641    2869 log.go:172] (0xc000bc03c0) (1) Data frame handling\nI0525 12:03:40.836790    2869 log.go:172] (0xc000bc03c0) (1) Data frame sent\nI0525 12:03:40.837326    2869 log.go:172] (0xc000b1b290) (0xc000bc03c0) Stream removed, broadcasting: 1\nI0525 12:03:40.837654    2869 log.go:172] (0xc000b1b290) Go away received\nI0525 12:03:40.837877    2869 log.go:172] (0xc000b1b290) (0xc000bc03c0) Stream removed, broadcasting: 1\nI0525 12:03:40.837947    2869 log.go:172] (0xc000b1b290) (0xc000bc0460) Stream removed, broadcasting: 3\nI0525 12:03:40.837983    2869 log.go:172] (0xc000b1b290) (0xc000b12280) Stream removed, broadcasting: 5\n"
May 25 12:03:40.842: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 25 12:03:40.842: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
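The kubectl exec call above is how the test makes ss-0 unhealthy without deleting it: moving index.html out of the htdocs directory makes the webserver's HTTP readiness check start failing while the container keeps running, so the pod stays Running but becomes Ready=false. A small Go wrapper around the same invocation would look like the sketch below; the kubectl path, server address, kubeconfig, namespace, and shell snippet are copied from the log, and the helper name is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
)

// breakReadiness runs the same kubectl exec command the log shows, moving the
// served index.html aside so the pod's readiness check fails while the
// container itself keeps running.
func breakReadiness(pod string) (string, error) {
	cmd := exec.Command("/usr/local/bin/kubectl",
		"--server=https://172.30.12.66:32772",
		"--kubeconfig=/root/.kube/config",
		"exec", "--namespace=statefulset-1651", pod, "--",
		"/bin/sh", "-x", "-c",
		"mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := breakReadiness("ss-0")
	fmt.Print(out)
	if err != nil {
		fmt.Println("kubectl exec failed:", err)
	}
}
```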

May 25 12:03:40.845: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 25 12:03:50.850: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 25 12:03:50.850: INFO: Waiting for statefulset status.replicas updated to 0
May 25 12:03:50.949: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 25 12:03:50.949: INFO: ss-0  kali-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  }]
May 25 12:03:50.949: INFO: 
May 25 12:03:50.949: INFO: StatefulSet ss has not reached scale 3, at 1
May 25 12:03:51.955: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.910096132s
May 25 12:03:53.178: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.90481737s
May 25 12:03:54.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.681787746s
May 25 12:03:55.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.429901255s
May 25 12:03:56.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.425327363s
May 25 12:03:57.444: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.419804664s
May 25 12:03:58.449: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.415766091s
May 25 12:03:59.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.410309125s
May 25 12:04:00.460: INFO: Verifying statefulset ss doesn't scale past 3 for another 405.24557ms
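The "doesn't scale past 3" loop above repeatedly re-reads the StatefulSet to confirm that burst scaling created the requested replicas, and no more, even while ss-0 is unready. Reading those counters directly with client-go is a short call; in the sketch below the namespace and StatefulSet name come from the log and the kubeconfig path is assumed.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Namespace and StatefulSet name taken from the log above.
	ss, err := cs.AppsV1().StatefulSets("statefulset-1651").Get(context.TODO(), "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("spec.replicas=%d status.replicas=%d readyReplicas=%d\n",
		*ss.Spec.Replicas, ss.Status.Replicas, ss.Status.ReadyReplicas)
}
```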
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1651
May 25 12:04:01.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:04:01.700: INFO: stderr: "I0525 12:04:01.603925    2889 log.go:172] (0xc000aa4000) (0xc000800000) Create stream\nI0525 12:04:01.604023    2889 log.go:172] (0xc000aa4000) (0xc000800000) Stream added, broadcasting: 1\nI0525 12:04:01.608354    2889 log.go:172] (0xc000aa4000) Reply frame received for 1\nI0525 12:04:01.608397    2889 log.go:172] (0xc000aa4000) (0xc0008000a0) Create stream\nI0525 12:04:01.608416    2889 log.go:172] (0xc000aa4000) (0xc0008000a0) Stream added, broadcasting: 3\nI0525 12:04:01.609811    2889 log.go:172] (0xc000aa4000) Reply frame received for 3\nI0525 12:04:01.609853    2889 log.go:172] (0xc000aa4000) (0xc000852000) Create stream\nI0525 12:04:01.609868    2889 log.go:172] (0xc000aa4000) (0xc000852000) Stream added, broadcasting: 5\nI0525 12:04:01.611320    2889 log.go:172] (0xc000aa4000) Reply frame received for 5\nI0525 12:04:01.693313    2889 log.go:172] (0xc000aa4000) Data frame received for 5\nI0525 12:04:01.693344    2889 log.go:172] (0xc000852000) (5) Data frame handling\nI0525 12:04:01.693354    2889 log.go:172] (0xc000852000) (5) Data frame sent\nI0525 12:04:01.693360    2889 log.go:172] (0xc000aa4000) Data frame received for 5\nI0525 12:04:01.693364    2889 log.go:172] (0xc000852000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 12:04:01.693380    2889 log.go:172] (0xc000aa4000) Data frame received for 3\nI0525 12:04:01.693384    2889 log.go:172] (0xc0008000a0) (3) Data frame handling\nI0525 12:04:01.693390    2889 log.go:172] (0xc0008000a0) (3) Data frame sent\nI0525 12:04:01.693397    2889 log.go:172] (0xc000aa4000) Data frame received for 3\nI0525 12:04:01.693401    2889 log.go:172] (0xc0008000a0) (3) Data frame handling\nI0525 12:04:01.695242    2889 log.go:172] (0xc000aa4000) Data frame received for 1\nI0525 12:04:01.695261    2889 log.go:172] (0xc000800000) (1) Data frame handling\nI0525 12:04:01.695268    2889 log.go:172] (0xc000800000) (1) Data frame sent\nI0525 12:04:01.695290    2889 log.go:172] (0xc000aa4000) (0xc000800000) Stream removed, broadcasting: 1\nI0525 12:04:01.695412    2889 log.go:172] (0xc000aa4000) Go away received\nI0525 12:04:01.695562    2889 log.go:172] (0xc000aa4000) (0xc000800000) Stream removed, broadcasting: 1\nI0525 12:04:01.695573    2889 log.go:172] (0xc000aa4000) (0xc0008000a0) Stream removed, broadcasting: 3\nI0525 12:04:01.695579    2889 log.go:172] (0xc000aa4000) (0xc000852000) Stream removed, broadcasting: 5\n"
May 25 12:04:01.700: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 25 12:04:01.700: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 25 12:04:01.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:04:01.918: INFO: stderr: "I0525 12:04:01.846509    2909 log.go:172] (0xc0000e8370) (0xc000843360) Create stream\nI0525 12:04:01.846575    2909 log.go:172] (0xc0000e8370) (0xc000843360) Stream added, broadcasting: 1\nI0525 12:04:01.848252    2909 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0525 12:04:01.848302    2909 log.go:172] (0xc0000e8370) (0xc0002f0000) Create stream\nI0525 12:04:01.848316    2909 log.go:172] (0xc0000e8370) (0xc0002f0000) Stream added, broadcasting: 3\nI0525 12:04:01.849421    2909 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0525 12:04:01.849473    2909 log.go:172] (0xc0000e8370) (0xc0003ce000) Create stream\nI0525 12:04:01.849486    2909 log.go:172] (0xc0000e8370) (0xc0003ce000) Stream added, broadcasting: 5\nI0525 12:04:01.850326    2909 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0525 12:04:01.912413    2909 log.go:172] (0xc0000e8370) Data frame received for 3\nI0525 12:04:01.912457    2909 log.go:172] (0xc0002f0000) (3) Data frame handling\nI0525 12:04:01.912484    2909 log.go:172] (0xc0002f0000) (3) Data frame sent\nI0525 12:04:01.912518    2909 log.go:172] (0xc0000e8370) Data frame received for 3\nI0525 12:04:01.912554    2909 log.go:172] (0xc0002f0000) (3) Data frame handling\nI0525 12:04:01.912589    2909 log.go:172] (0xc0000e8370) Data frame received for 5\nI0525 12:04:01.912630    2909 log.go:172] (0xc0003ce000) (5) Data frame handling\nI0525 12:04:01.912662    2909 log.go:172] (0xc0003ce000) (5) Data frame sent\nI0525 12:04:01.912690    2909 log.go:172] (0xc0000e8370) Data frame received for 5\nI0525 12:04:01.912707    2909 log.go:172] (0xc0003ce000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0525 12:04:01.914431    2909 log.go:172] (0xc0000e8370) Data frame received for 1\nI0525 12:04:01.914496    2909 log.go:172] (0xc000843360) (1) Data frame handling\nI0525 12:04:01.914526    2909 log.go:172] (0xc000843360) (1) Data frame sent\nI0525 12:04:01.914545    2909 log.go:172] (0xc0000e8370) (0xc000843360) Stream removed, broadcasting: 1\nI0525 12:04:01.914707    2909 log.go:172] (0xc0000e8370) Go away received\nI0525 12:04:01.914878    2909 log.go:172] (0xc0000e8370) (0xc000843360) Stream removed, broadcasting: 1\nI0525 12:04:01.914898    2909 log.go:172] (0xc0000e8370) (0xc0002f0000) Stream removed, broadcasting: 3\nI0525 12:04:01.914909    2909 log.go:172] (0xc0000e8370) (0xc0003ce000) Stream removed, broadcasting: 5\n"
May 25 12:04:01.918: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 25 12:04:01.918: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 25 12:04:01.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:04:02.114: INFO: stderr: "I0525 12:04:02.045759    2931 log.go:172] (0xc000996000) (0xc0003ecd20) Create stream\nI0525 12:04:02.045836    2931 log.go:172] (0xc000996000) (0xc0003ecd20) Stream added, broadcasting: 1\nI0525 12:04:02.048304    2931 log.go:172] (0xc000996000) Reply frame received for 1\nI0525 12:04:02.048348    2931 log.go:172] (0xc000996000) (0xc0008fe000) Create stream\nI0525 12:04:02.048360    2931 log.go:172] (0xc000996000) (0xc0008fe000) Stream added, broadcasting: 3\nI0525 12:04:02.049702    2931 log.go:172] (0xc000996000) Reply frame received for 3\nI0525 12:04:02.049747    2931 log.go:172] (0xc000996000) (0xc0009bc000) Create stream\nI0525 12:04:02.049767    2931 log.go:172] (0xc000996000) (0xc0009bc000) Stream added, broadcasting: 5\nI0525 12:04:02.050776    2931 log.go:172] (0xc000996000) Reply frame received for 5\nI0525 12:04:02.106327    2931 log.go:172] (0xc000996000) Data frame received for 5\nI0525 12:04:02.106384    2931 log.go:172] (0xc0009bc000) (5) Data frame handling\nI0525 12:04:02.106406    2931 log.go:172] (0xc0009bc000) (5) Data frame sent\nI0525 12:04:02.106423    2931 log.go:172] (0xc000996000) Data frame received for 5\nI0525 12:04:02.106437    2931 log.go:172] (0xc0009bc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0525 12:04:02.106476    2931 log.go:172] (0xc000996000) Data frame received for 3\nI0525 12:04:02.106514    2931 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0525 12:04:02.106530    2931 log.go:172] (0xc0008fe000) (3) Data frame sent\nI0525 12:04:02.106542    2931 log.go:172] (0xc000996000) Data frame received for 3\nI0525 12:04:02.106547    2931 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0525 12:04:02.108127    2931 log.go:172] (0xc000996000) Data frame received for 1\nI0525 12:04:02.108162    2931 log.go:172] (0xc0003ecd20) (1) Data frame handling\nI0525 12:04:02.108190    2931 log.go:172] (0xc0003ecd20) (1) Data frame sent\nI0525 12:04:02.108223    2931 log.go:172] (0xc000996000) (0xc0003ecd20) Stream removed, broadcasting: 1\nI0525 12:04:02.108248    2931 log.go:172] (0xc000996000) Go away received\nI0525 12:04:02.108685    2931 log.go:172] (0xc000996000) (0xc0003ecd20) Stream removed, broadcasting: 1\nI0525 12:04:02.108723    2931 log.go:172] (0xc000996000) (0xc0008fe000) Stream removed, broadcasting: 3\nI0525 12:04:02.108810    2931 log.go:172] (0xc000996000) (0xc0009bc000) Stream removed, broadcasting: 5\n"
May 25 12:04:02.115: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 25 12:04:02.115: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 25 12:04:02.127: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 25 12:04:02.127: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 25 12:04:02.127: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
May 25 12:04:02.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 25 12:04:02.331: INFO: stderr: "I0525 12:04:02.272656    2951 log.go:172] (0xc0000e00b0) (0xc000522be0) Create stream\nI0525 12:04:02.272724    2951 log.go:172] (0xc0000e00b0) (0xc000522be0) Stream added, broadcasting: 1\nI0525 12:04:02.275024    2951 log.go:172] (0xc0000e00b0) Reply frame received for 1\nI0525 12:04:02.275070    2951 log.go:172] (0xc0000e00b0) (0xc000acc000) Create stream\nI0525 12:04:02.275083    2951 log.go:172] (0xc0000e00b0) (0xc000acc000) Stream added, broadcasting: 3\nI0525 12:04:02.275929    2951 log.go:172] (0xc0000e00b0) Reply frame received for 3\nI0525 12:04:02.275965    2951 log.go:172] (0xc0000e00b0) (0xc000acc0a0) Create stream\nI0525 12:04:02.275987    2951 log.go:172] (0xc0000e00b0) (0xc000acc0a0) Stream added, broadcasting: 5\nI0525 12:04:02.276872    2951 log.go:172] (0xc0000e00b0) Reply frame received for 5\nI0525 12:04:02.327138    2951 log.go:172] (0xc0000e00b0) Data frame received for 3\nI0525 12:04:02.327171    2951 log.go:172] (0xc000acc000) (3) Data frame handling\nI0525 12:04:02.327183    2951 log.go:172] (0xc000acc000) (3) Data frame sent\nI0525 12:04:02.327192    2951 log.go:172] (0xc0000e00b0) Data frame received for 3\nI0525 12:04:02.327199    2951 log.go:172] (0xc000acc000) (3) Data frame handling\nI0525 12:04:02.327226    2951 log.go:172] (0xc0000e00b0) Data frame received for 5\nI0525 12:04:02.327235    2951 log.go:172] (0xc000acc0a0) (5) Data frame handling\nI0525 12:04:02.327248    2951 log.go:172] (0xc000acc0a0) (5) Data frame sent\nI0525 12:04:02.327257    2951 log.go:172] (0xc0000e00b0) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 12:04:02.327264    2951 log.go:172] (0xc000acc0a0) (5) Data frame handling\nI0525 12:04:02.328279    2951 log.go:172] (0xc0000e00b0) Data frame received for 1\nI0525 12:04:02.328300    2951 log.go:172] (0xc000522be0) (1) Data frame handling\nI0525 12:04:02.328310    2951 log.go:172] (0xc000522be0) (1) Data frame sent\nI0525 12:04:02.328321    2951 log.go:172] (0xc0000e00b0) (0xc000522be0) Stream removed, broadcasting: 1\nI0525 12:04:02.328398    2951 log.go:172] (0xc0000e00b0) Go away received\nI0525 12:04:02.328573    2951 log.go:172] (0xc0000e00b0) (0xc000522be0) Stream removed, broadcasting: 1\nI0525 12:04:02.328584    2951 log.go:172] (0xc0000e00b0) (0xc000acc000) Stream removed, broadcasting: 3\nI0525 12:04:02.328590    2951 log.go:172] (0xc0000e00b0) (0xc000acc0a0) Stream removed, broadcasting: 5\n"
May 25 12:04:02.332: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 25 12:04:02.332: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 25 12:04:02.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 25 12:04:02.562: INFO: stderr: "I0525 12:04:02.441644    2973 log.go:172] (0xc0009dc000) (0xc0006937c0) Create stream\nI0525 12:04:02.441687    2973 log.go:172] (0xc0009dc000) (0xc0006937c0) Stream added, broadcasting: 1\nI0525 12:04:02.443804    2973 log.go:172] (0xc0009dc000) Reply frame received for 1\nI0525 12:04:02.443840    2973 log.go:172] (0xc0009dc000) (0xc0008f4000) Create stream\nI0525 12:04:02.443849    2973 log.go:172] (0xc0009dc000) (0xc0008f4000) Stream added, broadcasting: 3\nI0525 12:04:02.444667    2973 log.go:172] (0xc0009dc000) Reply frame received for 3\nI0525 12:04:02.444720    2973 log.go:172] (0xc0009dc000) (0xc000693860) Create stream\nI0525 12:04:02.444732    2973 log.go:172] (0xc0009dc000) (0xc000693860) Stream added, broadcasting: 5\nI0525 12:04:02.445784    2973 log.go:172] (0xc0009dc000) Reply frame received for 5\nI0525 12:04:02.521893    2973 log.go:172] (0xc0009dc000) Data frame received for 5\nI0525 12:04:02.521919    2973 log.go:172] (0xc000693860) (5) Data frame handling\nI0525 12:04:02.521934    2973 log.go:172] (0xc000693860) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 12:04:02.554348    2973 log.go:172] (0xc0009dc000) Data frame received for 3\nI0525 12:04:02.554388    2973 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0525 12:04:02.554421    2973 log.go:172] (0xc0008f4000) (3) Data frame sent\nI0525 12:04:02.554438    2973 log.go:172] (0xc0009dc000) Data frame received for 3\nI0525 12:04:02.554453    2973 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0525 12:04:02.554587    2973 log.go:172] (0xc0009dc000) Data frame received for 5\nI0525 12:04:02.554618    2973 log.go:172] (0xc000693860) (5) Data frame handling\nI0525 12:04:02.556305    2973 log.go:172] (0xc0009dc000) Data frame received for 1\nI0525 12:04:02.556327    2973 log.go:172] (0xc0006937c0) (1) Data frame handling\nI0525 12:04:02.556337    2973 log.go:172] (0xc0006937c0) (1) Data frame sent\nI0525 12:04:02.556373    2973 log.go:172] (0xc0009dc000) (0xc0006937c0) Stream removed, broadcasting: 1\nI0525 12:04:02.556393    2973 log.go:172] (0xc0009dc000) Go away received\nI0525 12:04:02.556789    2973 log.go:172] (0xc0009dc000) (0xc0006937c0) Stream removed, broadcasting: 1\nI0525 12:04:02.556812    2973 log.go:172] (0xc0009dc000) (0xc0008f4000) Stream removed, broadcasting: 3\nI0525 12:04:02.556826    2973 log.go:172] (0xc0009dc000) (0xc000693860) Stream removed, broadcasting: 5\n"
May 25 12:04:02.562: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 25 12:04:02.562: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 25 12:04:02.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 25 12:04:02.785: INFO: stderr: "I0525 12:04:02.697493    2994 log.go:172] (0xc000a39760) (0xc000aa2820) Create stream\nI0525 12:04:02.697574    2994 log.go:172] (0xc000a39760) (0xc000aa2820) Stream added, broadcasting: 1\nI0525 12:04:02.706004    2994 log.go:172] (0xc000a39760) Reply frame received for 1\nI0525 12:04:02.706058    2994 log.go:172] (0xc000a39760) (0xc0008115e0) Create stream\nI0525 12:04:02.706080    2994 log.go:172] (0xc000a39760) (0xc0008115e0) Stream added, broadcasting: 3\nI0525 12:04:02.706673    2994 log.go:172] (0xc000a39760) Reply frame received for 3\nI0525 12:04:02.706700    2994 log.go:172] (0xc000a39760) (0xc00059ca00) Create stream\nI0525 12:04:02.706711    2994 log.go:172] (0xc000a39760) (0xc00059ca00) Stream added, broadcasting: 5\nI0525 12:04:02.707280    2994 log.go:172] (0xc000a39760) Reply frame received for 5\nI0525 12:04:02.747348    2994 log.go:172] (0xc000a39760) Data frame received for 5\nI0525 12:04:02.747377    2994 log.go:172] (0xc00059ca00) (5) Data frame handling\nI0525 12:04:02.747395    2994 log.go:172] (0xc00059ca00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 12:04:02.776343    2994 log.go:172] (0xc000a39760) Data frame received for 3\nI0525 12:04:02.776369    2994 log.go:172] (0xc0008115e0) (3) Data frame handling\nI0525 12:04:02.776514    2994 log.go:172] (0xc0008115e0) (3) Data frame sent\nI0525 12:04:02.776633    2994 log.go:172] (0xc000a39760) Data frame received for 3\nI0525 12:04:02.776672    2994 log.go:172] (0xc0008115e0) (3) Data frame handling\nI0525 12:04:02.776707    2994 log.go:172] (0xc000a39760) Data frame received for 5\nI0525 12:04:02.776727    2994 log.go:172] (0xc00059ca00) (5) Data frame handling\nI0525 12:04:02.778725    2994 log.go:172] (0xc000a39760) Data frame received for 1\nI0525 12:04:02.778767    2994 log.go:172] (0xc000aa2820) (1) Data frame handling\nI0525 12:04:02.778794    2994 log.go:172] (0xc000aa2820) (1) Data frame sent\nI0525 12:04:02.778816    2994 log.go:172] (0xc000a39760) (0xc000aa2820) Stream removed, broadcasting: 1\nI0525 12:04:02.778872    2994 log.go:172] (0xc000a39760) Go away received\nI0525 12:04:02.779257    2994 log.go:172] (0xc000a39760) (0xc000aa2820) Stream removed, broadcasting: 1\nI0525 12:04:02.779279    2994 log.go:172] (0xc000a39760) (0xc0008115e0) Stream removed, broadcasting: 3\nI0525 12:04:02.779291    2994 log.go:172] (0xc000a39760) (0xc00059ca00) Stream removed, broadcasting: 5\n"
May 25 12:04:02.785: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 25 12:04:02.785: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 25 12:04:02.785: INFO: Waiting for statefulset status.replicas updated to 0
May 25 12:04:02.789: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
May 25 12:04:12.797: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 25 12:04:12.797: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 25 12:04:12.797: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 25 12:04:12.861: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 25 12:04:12.861: INFO: ss-0  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  }]
May 25 12:04:12.861: INFO: ss-1  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:12.861: INFO: ss-2  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:12.861: INFO: 
May 25 12:04:12.861: INFO: StatefulSet ss has not reached scale 0, at 3
May 25 12:04:13.884: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 25 12:04:13.884: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  }]
May 25 12:04:13.884: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:13.884: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:13.884: INFO: 
May 25 12:04:13.884: INFO: StatefulSet ss has not reached scale 0, at 3
May 25 12:04:14.888: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 25 12:04:14.888: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  }]
May 25 12:04:14.889: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:14.889: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:14.889: INFO: 
May 25 12:04:14.889: INFO: StatefulSet ss has not reached scale 0, at 3
May 25 12:04:15.893: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 25 12:04:15.893: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  }]
May 25 12:04:15.893: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:15.893: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:15.893: INFO: 
May 25 12:04:15.893: INFO: StatefulSet ss has not reached scale 0, at 3
May 25 12:04:17.028: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 25 12:04:17.028: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  }]
May 25 12:04:17.028: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:17.028: INFO: 
May 25 12:04:17.028: INFO: StatefulSet ss has not reached scale 0, at 2
May 25 12:04:18.034: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 25 12:04:18.034: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  }]
May 25 12:04:18.034: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:18.034: INFO: 
May 25 12:04:18.034: INFO: StatefulSet ss has not reached scale 0, at 2
May 25 12:04:19.039: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 25 12:04:19.039: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  }]
May 25 12:04:19.039: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:19.039: INFO: 
May 25 12:04:19.039: INFO: StatefulSet ss has not reached scale 0, at 2
May 25 12:04:20.045: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 25 12:04:20.045: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  }]
May 25 12:04:20.045: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:20.045: INFO: 
May 25 12:04:20.045: INFO: StatefulSet ss has not reached scale 0, at 2
May 25 12:04:21.050: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 25 12:04:21.050: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  }]
May 25 12:04:21.050: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:21.050: INFO: 
May 25 12:04:21.050: INFO: StatefulSet ss has not reached scale 0, at 2
May 25 12:04:22.055: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 25 12:04:22.055: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:30 +0000 UTC  }]
May 25 12:04:22.055: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:04:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 12:03:50 +0000 UTC  }]
May 25 12:04:22.056: INFO: 
May 25 12:04:22.056: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-1651
May 25 12:04:23.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:04:23.201: INFO: rc: 1
May 25 12:04:23.201: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
May 25 12:04:33.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:04:33.295: INFO: rc: 1
May 25 12:04:33.295: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:04:43.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:04:43.400: INFO: rc: 1
May 25 12:04:43.401: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:04:53.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:04:53.516: INFO: rc: 1
May 25 12:04:53.516: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:05:03.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:05:03.618: INFO: rc: 1
May 25 12:05:03.619: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:05:13.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:05:13.725: INFO: rc: 1
May 25 12:05:13.725: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:05:23.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:05:23.832: INFO: rc: 1
May 25 12:05:23.832: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:05:33.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:05:33.927: INFO: rc: 1
May 25 12:05:33.927: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:05:43.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:05:44.028: INFO: rc: 1
May 25 12:05:44.028: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:05:54.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:05:54.143: INFO: rc: 1
May 25 12:05:54.143: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:06:04.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:06:04.243: INFO: rc: 1
May 25 12:06:04.244: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:06:14.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:06:14.342: INFO: rc: 1
May 25 12:06:14.342: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:06:24.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:06:24.439: INFO: rc: 1
May 25 12:06:24.439: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:06:34.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:06:34.543: INFO: rc: 1
May 25 12:06:34.543: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:06:44.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:06:44.644: INFO: rc: 1
May 25 12:06:44.645: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:06:54.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:06:54.741: INFO: rc: 1
May 25 12:06:54.741: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:07:04.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:07:04.849: INFO: rc: 1
May 25 12:07:04.849: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:07:14.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:07:14.944: INFO: rc: 1
May 25 12:07:14.944: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:07:24.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:07:25.049: INFO: rc: 1
May 25 12:07:25.049: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:07:35.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:07:35.142: INFO: rc: 1
May 25 12:07:35.142: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:07:45.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:07:45.234: INFO: rc: 1
May 25 12:07:45.234: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:07:55.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:07:55.334: INFO: rc: 1
May 25 12:07:55.334: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:08:05.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:08:05.431: INFO: rc: 1
May 25 12:08:05.431: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:08:15.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:08:15.525: INFO: rc: 1
May 25 12:08:15.525: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:08:25.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:08:25.612: INFO: rc: 1
May 25 12:08:25.612: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:08:35.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:08:35.719: INFO: rc: 1
May 25 12:08:35.719: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:08:45.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:08:45.905: INFO: rc: 1
May 25 12:08:45.906: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:08:55.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:08:55.992: INFO: rc: 1
May 25 12:08:55.992: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:09:05.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:09:06.095: INFO: rc: 1
May 25 12:09:06.095: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:09:16.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:09:16.186: INFO: rc: 1
May 25 12:09:16.186: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 25 12:09:26.186: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:09:26.292: INFO: rc: 1
May 25 12:09:26.292: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
May 25 12:09:26.292: INFO: Scaling statefulset ss to 0
May 25 12:09:26.304: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 25 12:09:26.307: INFO: Deleting all statefulset in ns statefulset-1651
May 25 12:09:26.309: INFO: Scaling statefulset ss to 0
May 25 12:09:26.318: INFO: Waiting for statefulset status.replicas updated to 0
May 25 12:09:26.319: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:09:26.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1651" for this suite.

• [SLOW TEST:355.953 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":259,"skipped":4357,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:09:26.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 25 12:09:30.664: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:09:30.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1305" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4373,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:09:30.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-1849/configmap-test-0a91edba-0c6d-404a-a04f-856412b86a32
STEP: Creating a pod to test consume configMaps
May 25 12:09:30.882: INFO: Waiting up to 5m0s for pod "pod-configmaps-137d136b-f3d4-4c80-995b-7e09179d9fc6" in namespace "configmap-1849" to be "Succeeded or Failed"
May 25 12:09:30.886: INFO: Pod "pod-configmaps-137d136b-f3d4-4c80-995b-7e09179d9fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.951158ms
May 25 12:09:32.964: INFO: Pod "pod-configmaps-137d136b-f3d4-4c80-995b-7e09179d9fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082438565s
May 25 12:09:34.968: INFO: Pod "pod-configmaps-137d136b-f3d4-4c80-995b-7e09179d9fc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085993118s
STEP: Saw pod success
May 25 12:09:34.968: INFO: Pod "pod-configmaps-137d136b-f3d4-4c80-995b-7e09179d9fc6" satisfied condition "Succeeded or Failed"
May 25 12:09:34.970: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-137d136b-f3d4-4c80-995b-7e09179d9fc6 container env-test: 
STEP: delete the pod
May 25 12:09:35.090: INFO: Waiting for pod pod-configmaps-137d136b-f3d4-4c80-995b-7e09179d9fc6 to disappear
May 25 12:09:35.095: INFO: Pod pod-configmaps-137d136b-f3d4-4c80-995b-7e09179d9fc6 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:09:35.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1849" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4478,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:09:35.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0525 12:09:48.777773       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 25 12:09:48.777: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:09:48.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9182" for this suite.

• [SLOW TEST:13.877 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":262,"skipped":4500,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:09:48.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0525 12:09:59.185090       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 25 12:09:59.185: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:09:59.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1628" for this suite.

• [SLOW TEST:10.215 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":263,"skipped":4523,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:09:59.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:09:59.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7252" for this suite.
STEP: Destroying namespace "nspatchtest-ea5146f8-9cf6-4ad1-b9ca-60e757cee812-9700" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":264,"skipped":4534,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:09:59.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 25 12:09:59.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-637d5bb8-f288-4b29-a794-7b2462d696f5" in namespace "downward-api-5964" to be "Succeeded or Failed"
May 25 12:09:59.849: INFO: Pod "downwardapi-volume-637d5bb8-f288-4b29-a794-7b2462d696f5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.704585ms
May 25 12:10:01.934: INFO: Pod "downwardapi-volume-637d5bb8-f288-4b29-a794-7b2462d696f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09822243s
May 25 12:10:03.939: INFO: Pod "downwardapi-volume-637d5bb8-f288-4b29-a794-7b2462d696f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103034869s
STEP: Saw pod success
May 25 12:10:03.939: INFO: Pod "downwardapi-volume-637d5bb8-f288-4b29-a794-7b2462d696f5" satisfied condition "Succeeded or Failed"
May 25 12:10:03.943: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-637d5bb8-f288-4b29-a794-7b2462d696f5 container client-container: 
STEP: delete the pod
May 25 12:10:04.098: INFO: Waiting for pod downwardapi-volume-637d5bb8-f288-4b29-a794-7b2462d696f5 to disappear
May 25 12:10:04.103: INFO: Pod downwardapi-volume-637d5bb8-f288-4b29-a794-7b2462d696f5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:10:04.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5964" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4539,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:10:04.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 25 12:10:08.710: INFO: Successfully updated pod "labelsupdate5ec09920-5543-45a4-8cc2-a36c2153c4f2"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:10:12.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3283" for this suite.

• [SLOW TEST:8.654 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4557,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:10:12.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 25 12:10:12.859: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 25 12:10:12.870: INFO: Waiting for terminating namespaces to be deleted...
May 25 12:10:12.872: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May 25 12:10:12.883: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 12:10:12.883: INFO: 	Container kindnet-cni ready: true, restart count 1
May 25 12:10:12.883: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 12:10:12.883: INFO: 	Container kube-proxy ready: true, restart count 0
May 25 12:10:12.883: INFO: labelsupdate5ec09920-5543-45a4-8cc2-a36c2153c4f2 from projected-3283 started at 2020-05-25 12:10:04 +0000 UTC (1 container statuses recorded)
May 25 12:10:12.883: INFO: 	Container client-container ready: true, restart count 0
May 25 12:10:12.883: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May 25 12:10:12.887: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 12:10:12.888: INFO: 	Container kindnet-cni ready: true, restart count 0
May 25 12:10:12.888: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 25 12:10:12.888: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.1612443507274925], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.16124435094a90a4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:10:13.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2200" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":267,"skipped":4587,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:10:13.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-7a570282-ce57-42e5-96d1-1c2dc54ef0c9
STEP: Creating a pod to test consume configMaps
May 25 12:10:14.030: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9b167a1e-add0-49d3-967b-af810724116a" in namespace "projected-7666" to be "Succeeded or Failed"
May 25 12:10:14.055: INFO: Pod "pod-projected-configmaps-9b167a1e-add0-49d3-967b-af810724116a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.541435ms
May 25 12:10:16.059: INFO: Pod "pod-projected-configmaps-9b167a1e-add0-49d3-967b-af810724116a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02924242s
May 25 12:10:18.314: INFO: Pod "pod-projected-configmaps-9b167a1e-add0-49d3-967b-af810724116a": Phase="Running", Reason="", readiness=true. Elapsed: 4.283466659s
May 25 12:10:20.318: INFO: Pod "pod-projected-configmaps-9b167a1e-add0-49d3-967b-af810724116a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.287781454s
STEP: Saw pod success
May 25 12:10:20.318: INFO: Pod "pod-projected-configmaps-9b167a1e-add0-49d3-967b-af810724116a" satisfied condition "Succeeded or Failed"
May 25 12:10:20.321: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-9b167a1e-add0-49d3-967b-af810724116a container projected-configmap-volume-test: 
STEP: delete the pod
May 25 12:10:20.377: INFO: Waiting for pod pod-projected-configmaps-9b167a1e-add0-49d3-967b-af810724116a to disappear
May 25 12:10:20.473: INFO: Pod pod-projected-configmaps-9b167a1e-add0-49d3-967b-af810724116a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:10:20.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7666" for this suite.

• [SLOW TEST:6.565 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4599,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:10:20.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0525 12:10:22.910316       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 25 12:10:22.910: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:10:22.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6814" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":269,"skipped":4632,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:10:22.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-1291
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1291 to expose endpoints map[]
May 25 12:10:24.158: INFO: successfully validated that service endpoint-test2 in namespace services-1291 exposes endpoints map[] (186.532521ms elapsed)
STEP: Creating pod pod1 in namespace services-1291
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1291 to expose endpoints map[pod1:[80]]
May 25 12:10:29.544: INFO: successfully validated that service endpoint-test2 in namespace services-1291 exposes endpoints map[pod1:[80]] (5.05043788s elapsed)
STEP: Creating pod pod2 in namespace services-1291
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1291 to expose endpoints map[pod1:[80] pod2:[80]]
May 25 12:10:33.240: INFO: successfully validated that service endpoint-test2 in namespace services-1291 exposes endpoints map[pod1:[80] pod2:[80]] (3.441877645s elapsed)
STEP: Deleting pod pod1 in namespace services-1291
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1291 to expose endpoints map[pod2:[80]]
May 25 12:10:34.440: INFO: successfully validated that service endpoint-test2 in namespace services-1291 exposes endpoints map[pod2:[80]] (1.195014375s elapsed)
STEP: Deleting pod pod2 in namespace services-1291
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1291 to expose endpoints map[]
May 25 12:10:35.680: INFO: successfully validated that service endpoint-test2 in namespace services-1291 exposes endpoints map[] (1.147246564s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:10:35.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1291" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.420 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":270,"skipped":4641,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:10:36.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0525 12:11:17.136529       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 25 12:11:17.136: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:11:17.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6228" for this suite.

• [SLOW TEST:40.812 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":271,"skipped":4674,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:11:17.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 12:11:17.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 25 12:11:20.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1522 create -f -'
May 25 12:11:26.796: INFO: stderr: ""
May 25 12:11:26.796: INFO: stdout: "e2e-test-crd-publish-openapi-2655-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 25 12:11:26.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1522 delete e2e-test-crd-publish-openapi-2655-crds test-cr'
May 25 12:11:27.509: INFO: stderr: ""
May 25 12:11:27.509: INFO: stdout: "e2e-test-crd-publish-openapi-2655-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
May 25 12:11:27.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1522 apply -f -'
May 25 12:11:28.471: INFO: stderr: ""
May 25 12:11:28.471: INFO: stdout: "e2e-test-crd-publish-openapi-2655-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 25 12:11:28.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1522 delete e2e-test-crd-publish-openapi-2655-crds test-cr'
May 25 12:11:28.782: INFO: stderr: ""
May 25 12:11:28.782: INFO: stdout: "e2e-test-crd-publish-openapi-2655-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 25 12:11:28.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2655-crds'
May 25 12:11:29.394: INFO: stderr: ""
May 25 12:11:29.394: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2655-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:11:32.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1522" for this suite.

• [SLOW TEST:15.176 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":272,"skipped":4675,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:11:32.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 25 12:11:32.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8642'
May 25 12:11:32.757: INFO: stderr: ""
May 25 12:11:32.757: INFO: stdout: "replicationcontroller/agnhost-master created\n"
May 25 12:11:32.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8642'
May 25 12:11:33.089: INFO: stderr: ""
May 25 12:11:33.089: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 25 12:11:34.094: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 12:11:34.094: INFO: Found 0 / 1
May 25 12:11:35.094: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 12:11:35.094: INFO: Found 0 / 1
May 25 12:11:36.096: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 12:11:36.097: INFO: Found 0 / 1
May 25 12:11:37.093: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 12:11:37.094: INFO: Found 1 / 1
May 25 12:11:37.094: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
May 25 12:11:37.096: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 12:11:37.096: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 25 12:11:37.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe pod agnhost-master-nmwng --namespace=kubectl-8642'
May 25 12:11:37.222: INFO: stderr: ""
May 25 12:11:37.222: INFO: stdout: "Name:         agnhost-master-nmwng\nNamespace:    kubectl-8642\nPriority:     0\nNode:         kali-worker2/172.17.0.18\nStart Time:   Mon, 25 May 2020 12:11:32 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.9\nIPs:\n  IP:           10.244.1.9\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://3cf93e1a589f631e1c5c9f731615f2320e7b5b41008144edd24bf6d4c9d8eb38\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 25 May 2020 12:11:35 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kgm9h (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-kgm9h:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-kgm9h\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                   Message\n  ----    ------     ----       ----                   -------\n  Normal  Scheduled    default-scheduler      Successfully assigned kubectl-8642/agnhost-master-nmwng to kali-worker2\n  Normal  Pulled     3s         kubelet, kali-worker2  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s         kubelet, kali-worker2  Created container agnhost-master\n  Normal  Started    2s         kubelet, kali-worker2  Started container agnhost-master\n"
May 25 12:11:37.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8642'
May 25 12:11:37.351: INFO: stderr: ""
May 25 12:11:37.351: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-8642\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-nmwng\n"
May 25 12:11:37.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8642'
May 25 12:11:37.448: INFO: stderr: ""
May 25 12:11:37.448: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-8642\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.97.140.28\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.9:6379\nSession Affinity:  None\nEvents:            \n"
May 25 12:11:37.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe node kali-control-plane'
May 25 12:11:37.587: INFO: stderr: ""
May 25 12:11:37.587: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 29 Apr 2020 09:30:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Mon, 25 May 2020 12:11:28 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 25 May 2020 12:10:28 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 25 May 2020 12:10:28 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 25 May 2020 12:10:28 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 25 May 2020 12:10:28 +0000   Wed, 29 Apr 2020 09:31:34 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.19\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2146cf85bed648199604ab2e0e9ac609\n  System UUID:                e83c0db4-babe-44fc-9dad-b5eeae6d23fd\n  Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n  Kernel Version:             4.15.0-88-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.18.2\n  Kube-Proxy Version:         v1.18.2\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-rvq2k                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     26d\n  kube-system                 coredns-66bff467f8-w6zxd                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     26d\n  kube-system                 etcd-kali-control-plane                       
0 (0%)        0 (0%)      0 (0%)           0 (0%)         26d\n  kube-system                 kindnet-65djz                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      26d\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         26d\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         26d\n  kube-system                 kube-proxy-pnhtq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26d\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         26d\n  local-path-storage          local-path-provisioner-bd4bb6b75-6l9ph        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
May 25 12:11:37.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe namespace kubectl-8642'
May 25 12:11:37.707: INFO: stderr: ""
May 25 12:11:37.707: INFO: stdout: "Name:         kubectl-8642\nLabels:       e2e-framework=kubectl\n              e2e-run=bdb4397a-25df-41e0-9572-afa1e212f873\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:11:37.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8642" for this suite.

• [SLOW TEST:5.388 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":273,"skipped":4678,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:11:37.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5751
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May 25 12:11:37.830: INFO: Found 0 stateful pods, waiting for 3
May 25 12:11:47.882: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 25 12:11:47.882: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 25 12:11:47.882: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 25 12:11:57.834: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 25 12:11:57.834: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 25 12:11:57.834: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
May 25 12:11:57.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5751 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 25 12:11:58.096: INFO: stderr: "I0525 12:11:57.983654    3905 log.go:172] (0xc000a87550) (0xc0009766e0) Create stream\nI0525 12:11:57.983719    3905 log.go:172] (0xc000a87550) (0xc0009766e0) Stream added, broadcasting: 1\nI0525 12:11:57.988214    3905 log.go:172] (0xc000a87550) Reply frame received for 1\nI0525 12:11:57.988258    3905 log.go:172] (0xc000a87550) (0xc00073d220) Create stream\nI0525 12:11:57.988271    3905 log.go:172] (0xc000a87550) (0xc00073d220) Stream added, broadcasting: 3\nI0525 12:11:57.990561    3905 log.go:172] (0xc000a87550) Reply frame received for 3\nI0525 12:11:57.990618    3905 log.go:172] (0xc000a87550) (0xc00073d2c0) Create stream\nI0525 12:11:57.990634    3905 log.go:172] (0xc000a87550) (0xc00073d2c0) Stream added, broadcasting: 5\nI0525 12:11:57.991957    3905 log.go:172] (0xc000a87550) Reply frame received for 5\nI0525 12:11:58.050850    3905 log.go:172] (0xc000a87550) Data frame received for 5\nI0525 12:11:58.050881    3905 log.go:172] (0xc00073d2c0) (5) Data frame handling\nI0525 12:11:58.050900    3905 log.go:172] (0xc00073d2c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 12:11:58.088623    3905 log.go:172] (0xc000a87550) Data frame received for 5\nI0525 12:11:58.088679    3905 log.go:172] (0xc00073d2c0) (5) Data frame handling\nI0525 12:11:58.088709    3905 log.go:172] (0xc000a87550) Data frame received for 3\nI0525 12:11:58.088724    3905 log.go:172] (0xc00073d220) (3) Data frame handling\nI0525 12:11:58.088741    3905 log.go:172] (0xc00073d220) (3) Data frame sent\nI0525 12:11:58.088762    3905 log.go:172] (0xc000a87550) Data frame received for 3\nI0525 12:11:58.088776    3905 log.go:172] (0xc00073d220) (3) Data frame handling\nI0525 12:11:58.090661    3905 log.go:172] (0xc000a87550) Data frame received for 1\nI0525 12:11:58.090713    3905 log.go:172] (0xc0009766e0) (1) Data frame handling\nI0525 12:11:58.090742    3905 log.go:172] (0xc0009766e0) (1) Data frame sent\nI0525 12:11:58.090783    3905 log.go:172] (0xc000a87550) (0xc0009766e0) Stream removed, broadcasting: 1\nI0525 12:11:58.090813    3905 log.go:172] (0xc000a87550) Go away received\nI0525 12:11:58.091406    3905 log.go:172] (0xc000a87550) (0xc0009766e0) Stream removed, broadcasting: 1\nI0525 12:11:58.091455    3905 log.go:172] (0xc000a87550) (0xc00073d220) Stream removed, broadcasting: 3\nI0525 12:11:58.091492    3905 log.go:172] (0xc000a87550) (0xc00073d2c0) Stream removed, broadcasting: 5\n"
May 25 12:11:58.096: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 25 12:11:58.096: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May 25 12:12:08.126: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
May 25 12:12:18.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5751 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:12:18.442: INFO: stderr: "I0525 12:12:18.362423    3925 log.go:172] (0xc000994840) (0xc0006b5720) Create stream\nI0525 12:12:18.362490    3925 log.go:172] (0xc000994840) (0xc0006b5720) Stream added, broadcasting: 1\nI0525 12:12:18.364668    3925 log.go:172] (0xc000994840) Reply frame received for 1\nI0525 12:12:18.364710    3925 log.go:172] (0xc000994840) (0xc000310be0) Create stream\nI0525 12:12:18.364722    3925 log.go:172] (0xc000994840) (0xc000310be0) Stream added, broadcasting: 3\nI0525 12:12:18.365605    3925 log.go:172] (0xc000994840) Reply frame received for 3\nI0525 12:12:18.365630    3925 log.go:172] (0xc000994840) (0xc0006b57c0) Create stream\nI0525 12:12:18.365637    3925 log.go:172] (0xc000994840) (0xc0006b57c0) Stream added, broadcasting: 5\nI0525 12:12:18.366406    3925 log.go:172] (0xc000994840) Reply frame received for 5\nI0525 12:12:18.433312    3925 log.go:172] (0xc000994840) Data frame received for 3\nI0525 12:12:18.433333    3925 log.go:172] (0xc000310be0) (3) Data frame handling\nI0525 12:12:18.433340    3925 log.go:172] (0xc000310be0) (3) Data frame sent\nI0525 12:12:18.433346    3925 log.go:172] (0xc000994840) Data frame received for 3\nI0525 12:12:18.433350    3925 log.go:172] (0xc000310be0) (3) Data frame handling\nI0525 12:12:18.433410    3925 log.go:172] (0xc000994840) Data frame received for 5\nI0525 12:12:18.433419    3925 log.go:172] (0xc0006b57c0) (5) Data frame handling\nI0525 12:12:18.433425    3925 log.go:172] (0xc0006b57c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 12:12:18.433861    3925 log.go:172] (0xc000994840) Data frame received for 5\nI0525 12:12:18.433894    3925 log.go:172] (0xc0006b57c0) (5) Data frame handling\nI0525 12:12:18.435455    3925 log.go:172] (0xc000994840) Data frame received for 1\nI0525 12:12:18.435479    3925 log.go:172] (0xc0006b5720) (1) Data frame handling\nI0525 12:12:18.435490    3925 log.go:172] (0xc0006b5720) (1) Data frame sent\nI0525 12:12:18.435505    3925 log.go:172] (0xc000994840) (0xc0006b5720) Stream removed, broadcasting: 1\nI0525 12:12:18.435703    3925 log.go:172] (0xc000994840) Go away received\nI0525 12:12:18.435936    3925 log.go:172] (0xc000994840) (0xc0006b5720) Stream removed, broadcasting: 1\nI0525 12:12:18.435959    3925 log.go:172] (0xc000994840) (0xc000310be0) Stream removed, broadcasting: 3\nI0525 12:12:18.435972    3925 log.go:172] (0xc000994840) (0xc0006b57c0) Stream removed, broadcasting: 5\n"
May 25 12:12:18.442: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 25 12:12:18.442: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 25 12:12:38.484: INFO: Waiting for StatefulSet statefulset-5751/ss2 to complete update
STEP: Rolling back to a previous revision
May 25 12:12:48.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5751 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 25 12:12:48.807: INFO: stderr: "I0525 12:12:48.629874    3946 log.go:172] (0xc00003a420) (0xc000b92000) Create stream\nI0525 12:12:48.629966    3946 log.go:172] (0xc00003a420) (0xc000b92000) Stream added, broadcasting: 1\nI0525 12:12:48.633488    3946 log.go:172] (0xc00003a420) Reply frame received for 1\nI0525 12:12:48.633569    3946 log.go:172] (0xc00003a420) (0xc00069c000) Create stream\nI0525 12:12:48.633596    3946 log.go:172] (0xc00003a420) (0xc00069c000) Stream added, broadcasting: 3\nI0525 12:12:48.634670    3946 log.go:172] (0xc00003a420) Reply frame received for 3\nI0525 12:12:48.634707    3946 log.go:172] (0xc00003a420) (0xc00069c0a0) Create stream\nI0525 12:12:48.634719    3946 log.go:172] (0xc00003a420) (0xc00069c0a0) Stream added, broadcasting: 5\nI0525 12:12:48.635745    3946 log.go:172] (0xc00003a420) Reply frame received for 5\nI0525 12:12:48.740887    3946 log.go:172] (0xc00003a420) Data frame received for 5\nI0525 12:12:48.740928    3946 log.go:172] (0xc00069c0a0) (5) Data frame handling\nI0525 12:12:48.740964    3946 log.go:172] (0xc00069c0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 12:12:48.797719    3946 log.go:172] (0xc00003a420) Data frame received for 3\nI0525 12:12:48.797744    3946 log.go:172] (0xc00069c000) (3) Data frame handling\nI0525 12:12:48.797757    3946 log.go:172] (0xc00069c000) (3) Data frame sent\nI0525 12:12:48.797766    3946 log.go:172] (0xc00003a420) Data frame received for 3\nI0525 12:12:48.797772    3946 log.go:172] (0xc00069c000) (3) Data frame handling\nI0525 12:12:48.798129    3946 log.go:172] (0xc00003a420) Data frame received for 5\nI0525 12:12:48.798143    3946 log.go:172] (0xc00069c0a0) (5) Data frame handling\nI0525 12:12:48.800402    3946 log.go:172] (0xc00003a420) Data frame received for 1\nI0525 12:12:48.800427    3946 log.go:172] (0xc000b92000) (1) Data frame handling\nI0525 12:12:48.800447    3946 log.go:172] (0xc000b92000) (1) Data frame sent\nI0525 12:12:48.800469    3946 log.go:172] (0xc00003a420) (0xc000b92000) Stream removed, broadcasting: 1\nI0525 12:12:48.800503    3946 log.go:172] (0xc00003a420) Go away received\nI0525 12:12:48.800889    3946 log.go:172] (0xc00003a420) (0xc000b92000) Stream removed, broadcasting: 1\nI0525 12:12:48.800911    3946 log.go:172] (0xc00003a420) (0xc00069c000) Stream removed, broadcasting: 3\nI0525 12:12:48.800923    3946 log.go:172] (0xc00003a420) (0xc00069c0a0) Stream removed, broadcasting: 5\n"
May 25 12:12:48.807: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 25 12:12:48.807: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 25 12:12:58.841: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
May 25 12:13:08.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5751 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 12:13:09.160: INFO: stderr: "I0525 12:13:09.083924    3967 log.go:172] (0xc000a10000) (0xc000a00000) Create stream\nI0525 12:13:09.083983    3967 log.go:172] (0xc000a10000) (0xc000a00000) Stream added, broadcasting: 1\nI0525 12:13:09.087123    3967 log.go:172] (0xc000a10000) Reply frame received for 1\nI0525 12:13:09.087191    3967 log.go:172] (0xc000a10000) (0xc0008e8000) Create stream\nI0525 12:13:09.087207    3967 log.go:172] (0xc000a10000) (0xc0008e8000) Stream added, broadcasting: 3\nI0525 12:13:09.088541    3967 log.go:172] (0xc000a10000) Reply frame received for 3\nI0525 12:13:09.088600    3967 log.go:172] (0xc000a10000) (0xc000a000a0) Create stream\nI0525 12:13:09.088622    3967 log.go:172] (0xc000a10000) (0xc000a000a0) Stream added, broadcasting: 5\nI0525 12:13:09.090211    3967 log.go:172] (0xc000a10000) Reply frame received for 5\nI0525 12:13:09.152766    3967 log.go:172] (0xc000a10000) Data frame received for 3\nI0525 12:13:09.152804    3967 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0525 12:13:09.152847    3967 log.go:172] (0xc000a10000) Data frame received for 5\nI0525 12:13:09.152891    3967 log.go:172] (0xc000a000a0) (5) Data frame handling\nI0525 12:13:09.152913    3967 log.go:172] (0xc000a000a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 12:13:09.152936    3967 log.go:172] (0xc0008e8000) (3) Data frame sent\nI0525 12:13:09.152972    3967 log.go:172] (0xc000a10000) Data frame received for 5\nI0525 12:13:09.152993    3967 log.go:172] (0xc000a000a0) (5) Data frame handling\nI0525 12:13:09.153274    3967 log.go:172] (0xc000a10000) Data frame received for 3\nI0525 12:13:09.153321    3967 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0525 12:13:09.154377    3967 log.go:172] (0xc000a10000) Data frame received for 1\nI0525 12:13:09.154394    3967 log.go:172] (0xc000a00000) (1) Data frame handling\nI0525 12:13:09.154402    3967 log.go:172] (0xc000a00000) (1) Data frame sent\nI0525 12:13:09.154417    3967 log.go:172] (0xc000a10000) (0xc000a00000) Stream removed, broadcasting: 1\nI0525 12:13:09.154693    3967 log.go:172] (0xc000a10000) (0xc000a00000) Stream removed, broadcasting: 1\nI0525 12:13:09.154706    3967 log.go:172] (0xc000a10000) (0xc0008e8000) Stream removed, broadcasting: 3\nI0525 12:13:09.154712    3967 log.go:172] (0xc000a10000) (0xc000a000a0) Stream removed, broadcasting: 5\n"
May 25 12:13:09.160: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 25 12:13:09.160: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 25 12:13:19.181: INFO: Waiting for StatefulSet statefulset-5751/ss2 to complete update
May 25 12:13:19.181: INFO: Waiting for Pod statefulset-5751/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 25 12:13:19.181: INFO: Waiting for Pod statefulset-5751/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 25 12:13:19.181: INFO: Waiting for Pod statefulset-5751/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 25 12:13:29.189: INFO: Waiting for StatefulSet statefulset-5751/ss2 to complete update
May 25 12:13:29.190: INFO: Waiting for Pod statefulset-5751/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 25 12:13:29.190: INFO: Waiting for Pod statefulset-5751/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 25 12:13:39.191: INFO: Waiting for StatefulSet statefulset-5751/ss2 to complete update
May 25 12:13:39.191: INFO: Waiting for Pod statefulset-5751/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 25 12:13:49.190: INFO: Deleting all statefulset in ns statefulset-5751
May 25 12:13:49.193: INFO: Scaling statefulset ss2 to 0
May 25 12:14:29.495: INFO: Waiting for statefulset status.replicas updated to 0
May 25 12:14:29.508: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:14:29.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5751" for this suite.

• [SLOW TEST:171.857 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":274,"skipped":4695,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 25 12:14:29.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 25 12:14:39.830: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 25 12:14:39.837: INFO: Pod pod-with-prestop-http-hook still exists
May 25 12:14:41.837: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 25 12:14:41.842: INFO: Pod pod-with-prestop-http-hook still exists
May 25 12:14:43.837: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 25 12:14:43.843: INFO: Pod pod-with-prestop-http-hook still exists
May 25 12:14:45.837: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 25 12:14:45.842: INFO: Pod pod-with-prestop-http-hook still exists
May 25 12:14:47.837: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 25 12:14:47.842: INFO: Pod pod-with-prestop-http-hook still exists
May 25 12:14:49.837: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 25 12:14:49.843: INFO: Pod pod-with-prestop-http-hook still exists
May 25 12:14:51.837: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 25 12:14:51.843: INFO: Pod pod-with-prestop-http-hook still exists
May 25 12:14:53.837: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 25 12:14:53.841: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 25 12:14:53.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9636" for this suite.

• [SLOW TEST:24.316 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4705,"failed":0}
SSSSSSSSSSSS
May 25 12:14:53.890: INFO: Running AfterSuite actions on all nodes
May 25 12:14:53.890: INFO: Running AfterSuite actions on node 1
May 25 12:14:53.890: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 5104.804 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS